ABSTRACT

Scheduling is one of the most important activities of the process manager; it decides which process in the ready queue will be assigned to the CPU.
There are different types of scheduling algorithms available for making this decision. One of them is the multilevel feedback queue (MLFQ), which is the scheduling mechanism used in Windows Vista. MLFQ scheduling is preemptive and uses a dynamic priority mechanism. When a process first enters the system, it is placed in the first queue. When it returns to the ready state after its first execution, it is placed in the second queue. After each subsequent execution, it moves to the next lower-priority queue. A short process completes quickly without migrating far down the hierarchy of ready queues, while a long process gradually drifts downward. Thus some queues use a simple FIFO mechanism, while others are treated with a round-robin (RR) approach.
A comparative analysis will be performed on the SJF, FIFO, RR and Priority scheduling algorithms to compare the average waiting time and average turnaround time of CPU scheduling, and a decision will be made to determine which scheduling algorithm is the best and which is the worst according to the scheduling criteria. On the other hand, a microprocessor is the most important unit within a computer system and is responsible for processing the system's instructions and processes. Microprocessors are generally classified according to the number of instructions they can process within a given time, their clock speed measured in megahertz, and the number of bits used per instruction. There are major trends that affect microprocessor performance, such as clock speed, transistor count, pipeline depth and the number of cores on the microprocessor, and these trends are interdependent. For instance, microprocessor clock speed increases with faster transistors and longer pipelines.

INTRODUCTION

This paper investigates two key areas: Operating Systems, where process control management is selected, and Computer System Architecture, where the microprocessor has been chosen as the research area.
The first part will research the scheduling mechanism employed in the chosen operating system and how it creates and handles processes and threads. The internal system structure and the problems faced using these techniques of process control management will also be discussed, along with the solutions used to overcome the identified problems. Lastly, pre-emptive and non-pre-emptive scheduling algorithms will be discussed to determine which scheduling algorithm is the best and which is the worst.
The second part will identify the major trends affecting microprocessor performance and design in recent years, and the differences between microprocessor design goals for laptops, servers, desktops and embedded systems will be discussed.

OPERATING SYSTEM - PROCESS CONTROL MANAGEMENT
An operating system (OS) is software which performs all the basic tasks such as file management, memory management, process management, handling input and output, and controlling peripheral devices. It acts as an intermediary between the user of a computer and the computer hardware. One of the most important functions of the OS is the scheduling of processes, or tasks. In a multiprogramming environment, the processor is used by multiple programs or processes, so the OS decides which process gets the processor, when, and for how much time. Typically, the hardware will interrupt a running process from time to time to enable the OS to make a new scheduling decision so as to share processor time fairly among a number of processes (Stallings, 2012). For processor management, an operating system keeps track of the processor and the status of each process, allocates the processor (CPU) to a process, and later de-allocates the processor when the process no longer needs it. Popular operating systems include Linux, Mac OS X, Unix and the Windows family; in this paper, Windows Vista has been chosen as the operating system for this research.
2.1 TYPE OF SCHEDULING MECHANISM USED IN WINDOWS VISTA

Windows Vista is an operating system that was produced by Microsoft for use on personal computers, including home desktops and laptops. The scheduling mechanism employed in Windows Vista is the multilevel feedback queue (MLFQ), which is a combination of fixed-priority preemptive scheduling, round-robin and first-in, first-out algorithms. With fixed-priority preemptive scheduling, the scheduler ensures that at any given time the processor executes the highest-priority task of all those that are currently ready to execute. In Windows Vista, the dispatcher uses a 32-level priority scheme to determine the order of thread execution. The priorities are divided into two classes: the real-time class, which contains threads with priorities ranging from 16 to 31, and the variable class, which contains threads with priorities from 0 to 15. The diagram below shows the 32-level priority scheme. As shown in the diagram, every priority level is represented by its own queue, with round-robin scheduling among the high-priority threads and FIFO among the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads complete very quickly.
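To make the two classes concrete, the small Python sketch below maps a priority level to its class and to the per-queue policy described above, reading the "high-priority" round-robin band as the real-time class and the lower band as the variable (FIFO) class. This is an illustrative assumption for the sketch, not Vista's kernel code, and the names are hypothetical.

```python
# Illustrative mapping of Vista's 32 priority levels to the two classes described above.
REAL_TIME_MIN, REAL_TIME_MAX = 16, 31   # real-time class
VARIABLE_MIN, VARIABLE_MAX = 0, 15      # variable class

def classify(priority):
    """Return the priority class and the per-queue policy described in the text."""
    if not VARIABLE_MIN <= priority <= REAL_TIME_MAX:
        raise ValueError("Vista thread priorities range from 0 to 31")
    if priority >= REAL_TIME_MIN:
        return "real-time class", "round-robin among threads at this level"
    return "variable class", "FIFO among threads at this level"

print(classify(24))  # ('real-time class', 'round-robin among threads at this level')
print(classify(7))   # ('variable class', 'FIFO among threads at this level')
```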
Since threads can use only one time unit of the round robin in the highest-priority queue, starvation can be a problem for longer high-priority threads. In addition, the kernel may change the priority level of a thread depending on its I/O and CPU usage, and Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.

Multilevel Feedback Queue Scheduling

Modern general-purpose operating systems, including Windows Vista, use multilevel feedback queues. Silberschatz and Gagne (2010) stated that a multilevel feedback queue (MLFQ) consists of a number of different queues, each with a different expected quantum time and priority level. Each queue can use a different scheduling algorithm, such as FIFO or round robin. The most outstanding feature of MLFQ algorithms is that jobs are allowed to move from one queue to another according to their runtime performance.
For example, when a process being processed in queue 1 requires a long time, it is moved to queue 2, where it waits for its turn to be processed again once all the processes in queue 1 have finished. This continues through as many levels as there are queues provided by the dispatcher. The last queue of the MLFQ uses round-robin scheduling, which continues to provide time slices until the process completes. By moving jobs from the highest-priority queues to the lowest-priority ones, the algorithm guarantees that short jobs and I/O-intensive processes get to the CPU much earlier. Multilevel feedback queue scheduling is the most flexible, because it can be tuned for any situation, but it is also the most complex to implement because of all the adjustable parameters, such as the number of queues, the scheduling algorithm for each queue, and the methods used to upgrade or demote processes from one queue to another.
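The following Python sketch models the demotion behaviour just described with a three-level MLFQ: a job that exhausts its quantum drops one level, and the lowest queue is serviced round-robin until jobs complete. The queue count, quantum values and job fields are illustrative assumptions, not the parameters Windows Vista actually uses.

```python
from collections import deque

# Illustrative quanta: higher-priority queues get shorter time slices.
QUANTA = [4, 8, 16]  # queue 0 is highest priority; queue 2 is the last (round-robin) queue

def mlfq(jobs):
    """Run {name: remaining_time} jobs through a 3-level MLFQ and return finish order."""
    queues = [deque() for _ in QUANTA]
    remaining = dict(jobs)
    for name in jobs:                      # new jobs always enter the top queue
        queues[0].append(name)
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name = queues[level].popleft()
        run = min(QUANTA[level], remaining[name])
        remaining[name] -= run
        if remaining[name] == 0:
            finished.append(name)
        elif level < len(QUANTA) - 1:
            queues[level + 1].append(name)  # used its full quantum: demote one level
        else:
            queues[level].append(name)      # last queue: round-robin until it completes
    return finished

# Example: a short job finishes before drifting far down; a long job sinks to the last queue.
print(mlfq({"short": 3, "long": 40}))  # ['short', 'long']
```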
2.2 CREATION AND HANDLING OF PROCESS OR THREAD

A process is basically a program in execution (Stallings, 2012). Each process, in turn, contains one or more independently executing threads.
A thread running within a process can create new threads, create new independent processes, and manage communication and synchronization between these objects. Under the scheduling model prior to Windows Vista, the kernel scheduler had to make some unfair assumptions regarding thread execution time because CPU cycle counters were not considered; instead, it relied only on the clock interval timer. For example, as shown in the diagram below, unfairness occurs when two threads A and B with the same priority become ready to run at the same time. Thread A starts running but is interrupted for a while. The time spent handling the interrupt is charged to the thread. Interrupt processing finishes and thread A starts running again, but it quickly hits the next clock interval.
The scheduler can only assume that thread A had been running all this time and now switches to thread B. Thread B starts running and has a chance to run for a full clock interval (barring pre-emption or interrupt handling). The diagram below represents this situation. From the illustration, it can be clearly stated that thread A was unfairly penalized, as the time it had to spend handling a device interrupt was accounted to its own CPU time, even though the thread probably had nothing to do with the interrupt. It was also unfairly penalized for the time the system was idling inside that clock interval before it was scheduled. On Windows Vista, threads run by default for two clock intervals, but because of changes in thread run-time accounting in Windows Vista, although threads still run in units of clock intervals, the system does not use the count of clock ticks as the deciding factor for how long a thread has run or whether its quantum has expired.
Instead, the scheduler was modified in Windows Vista to use the cycle counter register to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine. When the system starts up, a calculation is made whose result is the number of clock cycles that each quantum is equivalent to (this value is stored in the kernel variable KiCyclesPerClockQuantum). For instance, threads A and B become ready to run during the middle of an interval. Thread A starts running but is interrupted for a while.
The CPU clock cycles spent handling the interrupt are not charged to the thread. Interrupt processing finishes and thread A starts running again, but it quickly hits the next clock interval. The scheduler looks at the number of CPU clock cycles that have been charged to the thread and compares it to the number of CPU clock cycles that should have been charged at quantum end. Because the former number is much smaller than it should be, the scheduler assumes that thread A started running in the middle of a clock interval and may have additionally been interrupted.
Thread A gets its quantum increased by another clock interval, and the quantum target is recalculated. Thread A now has its chance to run for a full clock interval. At the next clock interval, thread A has finished its quantum, and thread B now gets a chance to run. As the diagram above illustrates, the scheduler does not charge the interrupt to thread A's turn, which means thread A still obtains its full turn on the CPU and therefore receives greater fairness in terms of its time slice.
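A simplified Python model of this cycle-based quantum accounting is sketched below. The cycle counts, the two-interval default quantum and the function names are assumptions made for the sketch; it illustrates the accounting idea described above rather than the actual Vista kernel implementation.

```python
# Illustrative model of Vista-style cycle-based quantum accounting (not kernel code).

CLOCK_INTERVAL_CYCLES = 1_000_000   # assumed CPU cycles in one clock interval
QUANTUM_CLOCK_INTERVALS = 2         # threads run by default for two clock intervals
cycles_per_clock_quantum = CLOCK_INTERVAL_CYCLES * QUANTUM_CLOCK_INTERVALS  # cf. KiCyclesPerClockQuantum

class Thread:
    def __init__(self, name):
        self.name = name
        self.charged_cycles = 0  # only cycles the thread itself executed; interrupt handling is excluded

def on_clock_tick(thread, cycles_run_since_last_tick):
    """Charge the thread for its own cycles and decide whether its quantum expired."""
    thread.charged_cycles += cycles_run_since_last_tick
    if thread.charged_cycles >= cycles_per_clock_quantum:
        thread.charged_cycles = 0
        return "quantum expired: switch to the next ready thread"
    # Fewer cycles than expected were charged (the thread started mid-interval or was
    # interrupted), so it effectively gets another clock interval to reach its target.
    return "quantum not reached: thread keeps running"

# Example: thread A lost half of its first interval to interrupt handling, so it is
# still below the two-interval cycle target after two ticks and keeps running.
a = Thread("A")
print(on_clock_tick(a, CLOCK_INTERVAL_CYCLES // 2))   # quantum not reached
print(on_clock_tick(a, CLOCK_INTERVAL_CYCLES))        # quantum not reached
print(on_clock_tick(a, CLOCK_INTERVAL_CYCLES // 2))   # quantum expired
```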
2.3 INTERNAL SYSTEM STRUCTURE AND PROBLEMS FACED USING THESE TECHNIQUES OF PROCESS CONTROL MANAGEMENT

To make thread-scheduling decisions, the kernel maintains a set of data structures known collectively as the dispatcher database.
The dispatcher database keeps track of which threads are waiting to execute and which processors are executing which threads. To improve scalability, including thread-dispatching concurrency, Windows multiprocessor systems have per-processor dispatcher ready queues; in this way, each CPU can check its own ready queues for the next thread to run without having to lock the system-wide ready queues. The dispatcher ready queues contain the threads that are in the ready state, waiting to be scheduled for execution. There is one queue for each of the 32 priority levels. To speed up the selection of which thread to run or preempt, Windows maintains a 32-bit mask called the ready summary (ReadySummary). Each bit that is set indicates one or more threads in the ready queue for that priority level (bit 0 represents priority 0, and so on).
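The Python sketch below illustrates how such a ready-summary bitmask can be maintained and scanned to find the highest priority with a ready thread. The helper names are hypothetical and the sketch is a simplified illustration of the idea, not the kernel's actual data structures.

```python
NUM_LEVELS = 32

ready_summary = 0                                   # one bit per priority level; bit 0 = priority 0
ready_queues = [[] for _ in range(NUM_LEVELS)]

def make_ready(thread, priority):
    """Put a thread on the ready queue for its priority and set the summary bit."""
    global ready_summary
    ready_queues[priority].append(thread)
    ready_summary |= 1 << priority

def highest_ready_priority():
    """Find the highest priority with a ready thread in constant time.
    On real hardware this is a single bit-scan instruction; here
    int.bit_length() plays the same role."""
    if ready_summary == 0:
        return None                                  # no ready threads at all
    return ready_summary.bit_length() - 1

def pick_next():
    """Dequeue the next thread to run and clear the summary bit if its queue empties."""
    global ready_summary
    prio = highest_ready_priority()
    if prio is None:
        return None
    thread = ready_queues[prio].pop(0)
    if not ready_queues[prio]:
        ready_summary &= ~(1 << prio)
    return thread

# Example usage
make_ready("worker", priority=8)
make_ready("ui", priority=24)
print(pick_next())  # "ui" runs first because priority 24 is the highest set bit
```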
Instead of scanning each ready list to see whether it is empty (which would make scheduling decisions dependent on the number of threads at different priorities), a single bit scan is performed as a native processor instruction to find the highest set bit. Regardless of the number of threads in the ready queues, this operation takes a constant amount of time, which is why the Windows scheduling algorithm is sometimes referred to as an O(1), or constant-time, algorithm.

PROBLEMS FACED USING THESE TECHNIQUES OF PROCESS CONTROL MANAGEMENT AND SOLUTIONS TO OVERCOME THEM

One of the major problems faced using the MLFQ in Windows Vista is starvation, or indefinite blocking, in which a low-priority task can wait forever because there are always other jobs around that have higher priority. Starvation is a situation where a process is not given the CPU time it needs to execute. One common solution to this problem is aging, in which the priorities of jobs increase the longer they wait; aging is a technique of gradually increasing the priority of processes that have waited in the system for a long period of time.
Under this scheme a low