OS Assignment

Asked: Nov 16th, 2016

Question description

Provide reasons to support your recommendations:

a) Summarize the OS scheduling policies (e.g. FCFS, SRT, SPN, RR, etc.) and recommend the most reliable and efficient policy for this new operating system.

b) Which of the process scheduling algorithms is most likely to lead to starvation?

c) A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts with the lowest priority. The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule. Discuss this concept with respect to the authors’ position.

Multicore Efficient Scheduling for Operating System to Avoid Congestion In Client-Server Architecture

Anchal Thakur (Research Scholar, CSE Department, L.R Institute of Engineering and Technology, Solan, India, spiritual.531@gmail.com) and Ravinder Thakur (Assistant Professor, CSE Department, L.R Institute of Engineering and Technology, Solan, India, er_thakur83@rediffmail.com)

Abstract—In the modern day and age, multicore processors along with parallel programming play a significant role in enhancing the performance of applications. The performance of a processor depends largely on how efficiently parallel programming has been implemented in the system. In this paper a parallel programming application is developed for environments that rely on multi-user, multi-tasking and multicore features to avoid congestion. A framework for client-server systems is proposed to coordinate the partitioned scheduling of tasks on the multicore platform of a server, sharing resources among users with improved operating efficiency and lower overheads. To test the performance of the proposed framework against the default, a real-world example of a banking system with a single server and multiple users is considered. The results obtained with partitioned scheduling are compared with the default, and they show that the proposed scheduling gains a 2x speedup over default scheduling.

Keywords—Client-Server Systems, Multicore Processing, Multicore Utilization, Parallel Processing.

I. INTRODUCTION

In today’s age of multi-tasking in a multicore environment, great importance is given to executing parallel tasks efficiently. A processor is the core component of a computer: it processes instructions and executes tasks at a certain speed. The speed with which a processor handles instructions and executes a task depends on the size of the processor; in other words, the bigger the processor, the faster it is.
In the past, computers were manufactured with single-core processors, which dominated the computer industry for many years. The amount of data that a processor processes impacts the overall efficiency and speed of the computer. There was a time when computers were only used to execute simple tasks, but over the last decade the use of computers to perform complex tasks has increased. New and bigger processors were developed which used higher clock frequencies to overcome the issue of speed. However, these bigger processors were not efficient and consumed a lot of energy, resulting in overheating.

To achieve efficiency and speed, CPU architects developed multicore technology, in which there is more than one independent unit, or core, inside a single chip. The biggest difference that can be noticed in a multicore system is the improved response time while running big applications. Additional benefits such as better performance, power management, faster execution time and multi-threading can also be achieved by using multicore processors. Since 2000, multiprocessors have been used extensively to achieve better performance in multilevel environments. Parallel programming further enhances the performance of desktop applications, web applications and software to a great extent. Multicore technology inherently supports running tasks in parallel, since multiple cores are available inside a single chip. The main objective of multicore processor architecture is the extraction of higher performance from the cores, which depends upon an efficient parallel programming mechanism and its implementation. Accordingly, the main objective of this paper is to develop a parallel algorithm that makes the best use of multicore technology.

(Manuscript submitted on 17th August, 2015. 978-1-4673-7910-6/15/$31.00 © 2015 IEEE.)
Most software companies consider only user requirements when launching their software, and give no consideration to the software’s efficiency on a multicore platform. These days software developers are required to give full consideration to the kind of computer architecture the software will run on, which could include multi-processor, multicore, etc. Applications are expected to perform better by using more cores, hardware threads and more memory, thereby meeting growing demands for performance and efficiency. Research work already carried out on using multicore platforms to improve system efficiency includes load balancing, power efficiency, dynamic thread scheduling, dynamic schedulers and more. For our work, load balancing is already an advantage; we focus on a single-server system where resources are shared through a user interface by multiple clients. In this case study we develop an application that provides a parallel framework at the server side when it is accessed by multiple clients. These days most enterprises work on an online system with a common user interface that is used for sharing and exchanging information with the server. For instance, consider a banking system in which every branch of the bank has multiple clients/users that access the server resources through a user interface or web interface. Operations such as insertion, updating, selection and deletion are fired by the branch users from various branches to the server. If the server goes down, every single user who is either connected to the server or trying to connect to it is affected. Using the proposed algorithm we can increase the speed with which data is exchanged and transferred at the user interface. We develop a desktop application, based on the real-world example of a banking system, to implement parallel data programming.
Our main objective is to utilize all the cores available in the system to run the application. This is achieved by distributing the tasks across the individual cores. The work includes a study of the client-server model, in which multiple clients send requests to a server for retrieving, selecting, updating and deleting data. Our focus is to provide an efficient partitioned scheduling framework for distributing tasks on the server in a multicore platform, to speed up the operating efficiency of the server and to decrease task overheads.

II. LITERATURE REVIEW

This section gives an overview of some of the applications proposed for utilizing the cores of a multicore platform. Kota Sujatha et al. [1] presented multicore parallel processing concepts for effective sorting and searching. Application programs related to sorting and searching were developed encapsulating parallel processing, and were tested on a huge database. Experimental results based on bubble sort and linear search show that work is done more easily when the load is shared, and more quickly by parallel processing with multicore utilization than by sequential processing. Vinay G. Vaidya, Priti Ranadive and Sudhakar Sah [2] proposed a dynamic scheduling algorithm to increase the utilization of multicore processors. The scheduler resides on all the cores and accesses a shared Task Data Structure (TDS) to pick up ready-to-execute tasks. Xiaozhong Geng, Gaochao Xu and Yuan Zhang [3] explained some major factors influencing multi-core load balancing based on chip multi-processors. They represent the dynamic load balancing scheduling problem as a quintuple, which gives a formal description of all the factors that affect load balancing on multi-core processors. Keqin Li [4] gave an optimal partitioning of a multicore server processor in a cloud computing environment. Ashwin Prasad and Anand Kodaganur [5] described methods for restructuring communication algorithms to optimize their performance on multi-core processors.
These methods are applied to optimize a PSK demodulator and to study the effects of the identified factors. Arunachalam Annamalai, Rance Rodrigues, Israel Koren and Sandip Kundu [6] proposed a novel dynamic thread scheduling scheme that continuously monitors the current characteristics of the executing threads and determines the best thread-to-core assignment. Sudhakar Sah and Vinay G. Vaidya [7] developed a scheduling algorithm that considers dependency information at the task level, dependency-release information within tasks, and load balancing of execution time. Tasks can be executed even before their dependent tasks complete by using accurate task-release information. Geunsik Lim and Sang-Bum Suh [8] presented a user-space memory scheduler that allocates the ideal memory node for tasks by monitoring the characteristics of a non-uniform memory architecture. Maikon A. F. Bueno, Jose A. M. de Holanda, Erinaldo Pereira and Eduardo Marques [9] developed a scheduler for heterogeneous multicore architectures and presented a heuristic approach to determine the performance projections of any running process on all processors of the architecture. Using these projections, processes are migrated to the processors whose resources best suit each process. Rajkumar Sharma and Priyesh Kanungo [10] gave a dynamic load balancing algorithm for heterogeneous multicore clusters, based on a node assignment factor calculated by a general user and a data proximity policy, which efficiently utilizes the power of multicore processors. James H. Anderson and John M. Calandrino [11] proposed a scheduling method for real-time systems implemented on multicore platforms that encourages certain groups of tasks to be scheduled together while ensuring real-time constraints. The co-scheduled tasks share a common working set executed in parallel, which makes effective use of shared caches.
Malith Jayasinghe, Zahir Tari and Panlop Zeephongsekul [12] proposed a multi-level multi-server task assignment policy with work-conserving migration for cluster-based systems. The policy attempts to resolve the core issues associated with existing size-based task assignment policies and utilises a two-level variance reduction mechanism with supported work-conserving migration.

III. PROPOSED CLIENT-SERVER MODEL

In this section we describe the proposed application programs developed for the client-server architecture. The application consists of three major programs, shown in the flow chart of Figure 1.

1) RMI registry: A mechanism for communication between two machines (client and server) running JVMs; it allows an application to call an object at the remote end. The registry process has to be started first. It creates the port on which the naming service "rmiregistry" will be running. RMI provides the mechanism by which the server and the client communicate and pass information back and forth, and it must be running on the server. Clients are given access to remote objects through it. Since the server uses the rmiregistry, it binds an instance of the object to the name that will later be used to look the object up.

2) Client interface: To share information with the server, a user logs a request using a client interface, which acts as the front end of the application. In our scenario the bank user uses the application (client interface) to access account details from the server.

3) Server interface: A server interface provides services when a request is made by one or more clients to share the resources stored in a remote database. The server interface acts as the back end of the application, where the resources are saved. In our bank example it is the remote server database in which the account details of a bank customer are stored.

IV. IMPLEMENTATION

As mentioned earlier and shown in Figure 1, RMI establishes the communication link between the client and the server, which are running Java virtual machines (JVMs). The registry process starts first and creates a port on which the naming service "rmiregistry" will be running. Clients are given access to remote objects through this port. The server calls the registry to associate (bind) a name with a remote client interface. The server binds the scheduler and exports itself to a port on the server machine. The registry is also accessed from a remote client interface via this port. When a client attempts to connect to the server using a client interface, the lookup request is sent to the port and a connection is established via a stub object. Once a connection between the client interface and the server interface is established and confirmed, the user logs their request.

In a multi-user system where multiple requests are logged in parallel, the results received back at the client interfaces also arrive in parallel. In such a parallel environment there is a risk of cross-mismatching the retrieved data, which would compromise its integrity. To maintain the integrity of the retrieved data and prevent cross-mismatching, a unique token number is used. When a request is logged through a client interface, a token, which is a unique integer value, is generated and assigned to that particular request. Once the data is retrieved from the database, it is pushed back to the correct client interface using the same assigned token. The list of requests is created on the server and, at the same time, a token number is generated for every request. Tokens streamline the flow of requests to the server. The scheduler then scans the list and divides it at the first token that asks for getResult(). For example, if there are 500 tokens and the first token that asks for getResult() is the 100th one, the scheduler will pick the first 100 tokens and send them for further processing.
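The RMI setup described in Section III can be sketched in a single file as below. This is a minimal illustration, not the authors' code: the `AccountService` interface, its `getAccountDetail` method and the registry port 1099 are assumptions chosen to match the bank scenario; a real deployment would run the registry, server and client in separate JVMs.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface for the bank scenario (an assumption; the
// paper does not give its interface definition).
interface AccountService extends Remote {
    String getAccountDetail(int accountNo) throws RemoteException;
}

public class RmiSketch {
    static class AccountServiceImpl implements AccountService {
        // Stand-in for the real database lookup on the server side.
        public String getAccountDetail(int accountNo) {
            return "details-for-" + accountNo;
        }
    }

    public static void main(String[] args) throws Exception {
        // Server side: start an in-process registry (normally a separate
        // rmiregistry process) and bind the exported service to a name.
        Registry registry = LocateRegistry.createRegistry(1099);
        AccountServiceImpl impl = new AccountServiceImpl();
        AccountService stub =
            (AccountService) UnicastRemoteObject.exportObject(impl, 0);
        registry.rebind("AccountService", stub);

        // Client side: look the stub up by name and invoke it remotely.
        AccountService client = (AccountService)
            LocateRegistry.getRegistry("localhost", 1099).lookup("AccountService");
        System.out.println(client.getAccountDetail(42)); // prints details-for-42

        // Unexport so the JVM can terminate cleanly.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```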
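The token-and-partition step above (500 tokens, cut at the 100th when it is the first to ask for getResult()) can be sketched as follows. The `Request` type and its `asksForResult` flag are illustrative assumptions; the paper does not give its data structures.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TokenPartition {
    static final AtomicInteger NEXT_TOKEN = new AtomicInteger(1);

    // Hypothetical request record: each request gets a unique integer token.
    static class Request {
        final int token = NEXT_TOKEN.getAndIncrement();
        final boolean asksForResult; // true once the client has called getResult()
        Request(boolean asksForResult) { this.asksForResult = asksForResult; }
    }

    // Scan the request list and return the prefix up to and including the
    // first request that asks for its result.
    static List<Request> partition(List<Request> queue) {
        for (int i = 0; i < queue.size(); i++) {
            if (queue.get(i).asksForResult) {
                return new ArrayList<>(queue.subList(0, i + 1));
            }
        }
        return new ArrayList<>(queue); // no getResult() yet: take everything
    }

    public static void main(String[] args) {
        // Mirror the paper's example: 500 requests, the 100th asks first.
        List<Request> queue = new ArrayList<>();
        for (int i = 1; i <= 500; i++) queue.add(new Request(i == 100));
        System.out.println(partition(queue).size()); // prints 100
    }
}
```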
Once the list of tokens is partitioned, a cached thread pool is created, consisting of threads that call the tasks, return the results to the clients and close the connections. Whenever a thread is needed to execute a request, the pool either returns a previously constructed thread from its cache, if one is still available, or creates a new thread. Cached threads last only a short time: by default they are terminated after 60 s of inactivity, so a thread that remains idle for long enough consumes no resources. This kind of pool typically improves the performance of an application that executes many short-lived asynchronous tasks. The connection to the database is established at the same time as the requests are distributed to the cores. The cores only process the requests; the results are retrieved from the database. Since the application deals with multi-tasking and multiple users in a multicore environment, a lock mechanism is already maintained in the database for data consistency, which helps prevent resource-sharing conflicts. The scheduler takes completed tasks, processes their results in the order they complete, and returns them to the clients according to their respective token numbers to maintain consistency. After the result is retrieved by the client, the token list is cleared and refreshed with new requests on the server. Once a client gets its result, the corresponding token becomes eligible for garbage collection.

Fig. 1. Flowchart of the implementation of the proposed scheduling for the client-server model.

V. RESULT ANALYSIS

To analyse the performance of the proposed scheduler, various requests in the form of user sets are generated randomly and tested with the proposed partitioned scheduling framework and with default scheduling, on a Mac OS X Yosemite Intel Core i7 CPU platform using a virtual environment.
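The dispatch step of Section IV — a cached thread pool executing the partitioned requests, with each result returned under its own token — can be sketched as below. `lookupDatabase` is a stand-in for the real database call, and the method names are assumptions, not the authors' API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TokenDispatch {
    // Stand-in for the database query performed on a core.
    static String lookupDatabase(int token) { return "row-" + token; }

    // Submit every token's work to a cached pool (idle threads are reused and
    // discarded after 60 s by default), then collect each result under the
    // token that produced it, so replies cannot cross between clients.
    static Map<Integer, String> dispatch(int[] tokens) {
        ExecutorService pool = Executors.newCachedThreadPool();
        Map<Integer, Future<String>> pending = new LinkedHashMap<>();
        for (int token : tokens) {
            pending.put(token, pool.submit(() -> lookupDatabase(token)));
        }
        Map<Integer, String> results = new LinkedHashMap<>();
        try {
            for (Map.Entry<Integer, Future<String>> e : pending.entrySet()) {
                results.put(e.getKey(), e.getValue().get()); // token -> its result
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdown();
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new int[]{1, 2, 3}).get(2)); // prints row-2
    }
}
```

Keying the result map by token, rather than relying on completion order, is what prevents the cross-mismatching the paper describes.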
Table 1 shows the average execution time taken by different user sets under default scheduling and under the proposed scheduling, together with the speedup gained. Speedup is taken as the metric of relative performance improvement. The average execution times of the proposed and default scheduling are plotted in the graph shown in Figure 2.

TABLE 1. Implementation results of the application.

Maximum user sets   Default scheduling (ms)   Proposed scheduling (ms)   Speedup
 1000                26172                     14579                     1.80
 3000                73500                     39166                     1.88
 5000               141332                     74443                     1.90
 7000               207479                    106465                     1.95
 9000               277669                    136835                     2.03
11000               398952                    186874                     2.13
13000               538582                    248279                     2.17
15000               738304                    328955                     2.24

Testing is carried out for two different cases:

Case 1: The proposed scheduler, in which requests are collected in a list with token numbers and submitted to the cores using the partitioned framework with the cached-thread policy.

Case 2: Default operating system scheduling, in which the list is filled with requests along with their token numbers and submitted to the operating system on the multicore platform.

Experimental results are compared with the default operating system scheduling; execution times are measured in milliseconds for the different user sets.

Fig. 2. Execution time comparison: proposed vs. default.

Different user sets were tested on the proposed and the default scheduling, and all the experimental results are summed up in Table 1 for the analysis.

Fig. 3. Speedup gain for the application.

The results of the testing done on the local host, shown in Table 1, depict that the proposed partitioned scheduling mechanism for the client-server model achieves a 2x speedup over default scheduling and increases the overall performance of the system. The speedup gained by the application on the different user sets is shown in Figure 3.

VI. CONCLUSION

In this paper we have proposed a partitioned framework that provides efficient scheduling for executing tasks in parallel in a multicore environment.
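The speedup column of Table 1 is the default execution time divided by the time under the proposed scheduling, rounded to two decimals. A quick check of the first and last rows:

```java
public class Speedup {
    // Relative performance improvement: default time / proposed time,
    // rounded to two decimal places as reported in Table 1.
    static double speedup(long defaultMs, long proposedMs) {
        return Math.round(100.0 * defaultMs / proposedMs) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(speedup(26172, 14579));   // 1.8 (Table 1: 1.80)
        System.out.println(speedup(738304, 328955)); // 2.24
    }
}
```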
This is achieved by fully utilising the cores so that the maximum number of tasks can be executed in parallel. For a particular website or application, tasks are executed on the multicore by the operating system on a priority basis. Since, to the operating system, the user application is a single task, our proposal gives priority to the user application. For multi-user and multi-tasking work in a multicore environment, the client-server architecture is the most favourable one. The proposed program collects requests and partitions them for parallel distribution over the server's cores, implementing application-level parallelism, maintaining integrity and speeding up performance. The scheduling has been implemented to avoid overheads and to improve the efficiency of executing tasks when the server is loaded with a large number of requests. The test results show that the proposed scheduling provides better efficiency and achieves a 2x speedup over the default. In the domain of multicore processor scheduling and synchronization, the application developed in this paper works at the application level and can be extended to web and mobile applications. In future work, the proposed approach will be applied to real-time systems to study the performance gained by the algorithm on those systems.

REFERENCES

[1] Kota Sujatha et al., "Multicore Parallel Processing Concepts for Effective Sorting and Searching," 2015 International Conference on Signal Processing and Communication Engineering Systems (SPACES), pp. 162-166, IEEE, 2015.
[2] Vinay G. Vaidya, Priti Ranadive and Sudhakar Sah, "Dynamic scheduler for multi-core systems," 2010 2nd International Conference on Software Technology and Engineering (ICSTE), vol. 1, pp. V1-13, IEEE, 2010.
[3] Xiaozhong Geng, Gaochao Xu and Yuan Zhang, "Dynamic load balancing scheduling model based on multicore processors," 2010 Fifth International Conference on Frontier of Computer Science and Technology, pp. 398-403, IEEE, 2010.
[4] Keqin Li, "Optimal Partitioning of a Multicore Server Processor," 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), pp. 1803-1811, IEEE, 2012.
[5] Ashwin Prasad and Anand Kodaganur, "Restructuring Communication Algorithms for Improved Performance on Multi-core Processors," National Conference on Communications, IIT Kanpur, pp. 235-239, 2007.
[6] Arunachalam Annamalai, Rance Rodrigues, Israel Koren and Sandip Kundu, "Dynamic Thread Scheduling in Asymmetric Multicores to Maximize Performance-per-watt," 2012 6th International Parallel and Distributed Processing Symposium Workshops & PhD Forum, pp. 964-971, IEEE, 2012.
[7] Sudhakar Sah and Vinay G. Vaidya, "Dependency aware ahead of time static scheduler for multicore," 2014 IEEE/ACIS 13th International Conference on Computer and Information Science (ICIS), pp. 337-342, IEEE, 2014.
[8] Geunsik Lim and Sang-Bum Suh, "User-Level Memory Scheduler for Optimizing Application Performance in NUMA-Based Multicore Systems," 2014 5th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 240-243, IEEE, 2014.
[9] Maikon A. F. Bueno, Jose A. M. de Holanda, Erinaldo Pereira and Eduardo Marques, "Operating system support to an online hardware-software co-design scheduler for heterogeneous multicore architectures," 2014 IEEE 20th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), pp. 1-10, IEEE, 2014.
[10] Rajkumar Sharma and Priyesh Kanungo, "Dynamic Load Balancing Algorithm for Heterogeneous Multi-Core Processors Cluster," 2014 Fourth International Conference on Communication Systems and Network Technologies, pp. 288-292, IEEE, 2014.
[11] James H. Anderson and John M. Calandrino, "Parallel task scheduling on multicore platforms," ACM SIGBED Review, vol. 3, no. 1, pp. 16, 2006.
[12] Malith Jayasinghe, Zahir Tari and Panlop Zeephongsekul, "Multi-level multi-server task assignment with work-conserving migration," 2010 9th IEEE International Symposium on Network Computing and Applications (NCA), pp. 178-181, IEEE, 2010.

2015 International Conference on Green Computing and Internet of Things (ICGCIoT)
