Assignment-1
Please download the code for Chapter 2, then:
1. Compile and run Figure 2.1 on page 3.
2. Run the single program on page 3.
3. Run many programs at once on page 4.
4. Compile and run Figure 2.3 on page 5.
5. Run the single program that accesses memory on page 5.
6. Run the multiple programs that access memory on page 6.
7. Write your observations on how a single CPU can behave like multiple CPUs.
Please use screenshots and attach your files to this assignment. Thank you.
Assignment-2
A process's state changes as it runs on a CPU. As described in the chapter,
processes can be in a few different states:
  RUNNING - the process is using the CPU right now
  READY   - the process could be using the CPU right now,
            but (alas) some other process is using it
  WAITING - the process is waiting on I/O
            (e.g., it issued a request to a disk)
  DONE    - the process is finished executing
In this homework, we'll see how these process states change as a program
runs, and thus learn a little bit better how these things work.
To run the program and get its options, do this:
prompt> ./process-run.py -h
If this doesn't work, type "python" before the command, like this:
prompt> python process-run.py -h
It will print the usage: process-run.py [options]
The most important option to understand is the PROCESS_LIST (as specified by
the -l or --processlist flags) which specifies exactly what each running
program (or "process") will do. A process consists of instructions, and each
instruction can just do one of two things:
- use the CPU
- issue an IO (and wait for it to complete)
When a process uses the CPU (and does no IO at all), it should simply
alternate between RUNNING on the CPU or being READY to run. For example, here
is a simple run that just has one program being run, and that program only
uses the CPU (it does no IO).
prompt> ./process-run.py -l 5:100
Produce a trace of what would happen when you run these processes:
Process 0
cpu
cpu
cpu
cpu
cpu
Important behaviors:
System will switch when the current process is FINISHED or ISSUES AN IO
After IOs, the process issuing the IO will run LATER (when it is its turn)
prompt>
Here, the process we specified is "5:100" which means it should consist of 5
instructions, and the chances that each instruction is a CPU instruction are
100%.
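The X:Y description can be mimicked with a tiny generator. The following is an illustrative sketch only: the function name make_process and the 'c'/'i' character encoding are invented here, and process-run.py's internals may well differ.

```c
#include <stdlib.h>

// Sketch: build a process of n instructions, where each instruction is
// a CPU op ('c') with probability chance_cpu percent, else an I/O ('i').
// This mirrors the homework's "n:chance" descriptions.
void make_process(int n, int chance_cpu, char *out) {
    for (int k = 0; k < n; k++)
        out[k] = (rand() % 100 < chance_cpu) ? 'c' : 'i';
    out[n] = '\0';   // NUL-terminate so the result prints as a string
}
```

For "5:100" this always yields five CPU instructions; for "3:0", three I/Os, matching the traces shown later.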
You can see what happens to the process by using the -c flag, which computes the
answers for you:
prompt> ./process-run.py -l 5:100 -c
Time     PID: 0        CPU        IOs
  1     RUN:cpu         1
  2     RUN:cpu         1
  3     RUN:cpu         1
  4     RUN:cpu         1
  5     RUN:cpu         1
This result is not too interesting: the process is simply in the RUN state and
then finishes, using the CPU the whole time and thus keeping the CPU busy the
entire run, and not doing any I/Os.
Let's make it slightly more complex by running two processes:
prompt> ./process-run.py -l 5:100,5:100
Produce a trace of what would happen when you run these processes:
Process 0
cpu
cpu
cpu
cpu
cpu
Process 1
cpu
cpu
cpu
cpu
cpu
Important behaviors:
Scheduler will switch when the current process is FINISHED or ISSUES AN IO
After IOs, the process issuing the IO will run LATER (when it is its turn)
In this case, two different processes run, each again just using the CPU. What
happens when the operating system runs them? Let's find out:
prompt> ./process-run.py -l 5:100,5:100 -c
Time     PID: 0     PID: 1        CPU        IOs
  1     RUN:cpu      READY         1
  2     RUN:cpu      READY         1
  3     RUN:cpu      READY         1
  4     RUN:cpu      READY         1
  5     RUN:cpu      READY         1
  6        DONE    RUN:cpu         1
  7        DONE    RUN:cpu         1
  8        DONE    RUN:cpu         1
  9        DONE    RUN:cpu         1
 10        DONE    RUN:cpu         1
As you can see above, first the process with "process ID" (or "PID") 0 runs,
while process 1 is READY to run but just waits until 0 is done. When 0 is
finished, it moves to the DONE state, while 1 runs. When 1 finishes, the trace
is done.
Let's look at one more example before getting to some questions. In this
example, the process just issues I/O requests.
prompt> ./process-run.py -l 3:0
Produce a trace of what would happen when you run these processes:
Process 0
io-start
io-start
io-start
Important behaviors:
System will switch when the current process is FINISHED or ISSUES AN IO
After IOs, the process issuing the IO will run LATER (when it is its turn)
What do you think the execution trace will look like? Let's find out:
prompt> ./process-run.py -l 3:0 -c
Time     PID: 0           CPU        IOs
  1     RUN:io-start       1
  2      WAITING                      1
  3      WAITING                      1
  4      WAITING                      1
  5      WAITING                      1
  6*    RUN:io-start       1
  7      WAITING                      1
  8      WAITING                      1
  9      WAITING                      1
 10      WAITING                      1
 11*    RUN:io-start       1
 12      WAITING                      1
 13      WAITING                      1
 14      WAITING                      1
 15      WAITING                      1
 16*        DONE
As you can see, the program just issues three I/Os. When each I/O is issued,
the process moves to a WAITING state, and while the device is busy servicing
the I/O, the CPU is idle.
Let's print some stats (run the same command as above, but with the -p flag)
to see some overall behaviors:
Stats: Total Time 16
Stats: CPU Busy 3 (18.75%)
Stats: IO Busy 12 (75.00%)
As you can see, the trace took 16 clock ticks to run, but the CPU was only
busy less than 20% of the time. The IO device, on the other hand, was quite
busy. In general, we'd like to keep all the devices busy, as that is a better
use of resources.
There are a few other important flags:
-s SEED, --seed=SEED  the random seed
   this gives you a way to create a bunch of different jobs randomly
-L IO_LENGTH, --iolength=IO_LENGTH
this determines how long IOs take to complete (default is 5 ticks)
-S PROCESS_SWITCH_BEHAVIOR, --switch=PROCESS_SWITCH_BEHAVIOR
when to switch between processes: SWITCH_ON_IO, SWITCH_ON_END
this determines when we switch to another process:
- SWITCH_ON_IO, the system will switch when a process issues an IO
- SWITCH_ON_END, the system will only switch when the current process is done
-I IO_DONE_BEHAVIOR, --iodone=IO_DONE_BEHAVIOR
type of behavior when IO ends: IO_RUN_LATER, IO_RUN_IMMEDIATE
this determines when a process runs after it issues an IO:
- IO_RUN_IMMEDIATE: switch to this process right now
- IO_RUN_LATER: switch to this process when it is natural to
(e.g., depending on process-switching behavior)
Now answer questions 1-5 at the end of Chapter 4 to learn more.
Please briefly answer the following questions in your own words: the first five
using Chapter 2, and the next five using Chapters 4 and 5.
QUESTION 1
What is virtualization of the CPU?
QUESTION 2
What does the spin() function in our program do?
QUESTION 3
Discuss how the operating system virtualizes memory in Figure 2.3.
QUESTION 4
Define the following terms
a. Fetch
b. Decode
c. Execute
d. PID
e. distributed operating system
f. virtualization
QUESTION 5
Where does a program store its data structures?
QUESTION 6
If time sharing allows a resource to be used for a little while by one entity, and then a
little while by another, what is its natural counterpart, and how does it use its resources?
a. CPU, where the resource will process data until the running program is done, then the OS
   will sit idle or forcefully take the process away
b. Mechanisms, where it shares low-level methods or protocols to implement a needed piece of
   a program using a context switch
c. Space sharing, where a resource is divided into blocks among those who wish to use it;
   once a block is assigned to a file, it is not likely to be assigned to another file until
   the user deletes it
d. All of the above
QUESTION 7
There are two processes: Process A and Process B. What are the situations of Process B when
it is moved from Ready to Running, and of Process A when it is moved from Running to Ready?
a. Process B has been descheduled and Process A has been blocked
b. Process B has been scheduled and Process A has been descheduled
c. Process A has been scheduled and Process B has been blocked
d. Process A has been scheduled and Process B has been descheduled
QUESTION 8
A system call is made to keep running (create) copies of the same program. What procedure
is best used?
a. fork()
b. wait()
c. exec()
d. kill()
QUESTION 9
A new process has been created, which leads to the creation of another process. What are
the characteristics of these processes?
a. The creating process is called a parent; it has a process identifier, and it will execute
   after the created process, called a child, has executed.
b. The created process uses a create routine called a parent, the process identifier is not
   needed, and it will execute first. The mechanic process is called a child.
c. The created process is called a parent; it has a process identifier, and it will execute
   first. The creating process is called a child; it has a process identifier but cannot
   execute.
d. The creating process is called a parent; it has a process identifier, and it will execute
   first. The created process is called a child; it has a process identifier but cannot
   execute.
QUESTION 10
There are currently two programs: Program A is running and Program B is blocked. What is
the name of the running program?
a. process
b. virtualization
c. convoy effects
d. algorithms
Virtualization
Advanced OS
Chapter 2
Objectives
Expectations of a Process
Distributed Operating system
Virtualization of CPU
Virtualization of Memory
Expectations of a Process
• Running a program does the following things:
• fetches an instruction from memory,
• decodes it (figuring out which instruction this is),
• and executes it (i.e., it does the thing that it is supposed to do, like multiply numbers
together, memory access, conditional check, jump to a function, allow access to other
devices and so on).
• After it is done with this instruction, the processor moves on to the next instruction,
and so on, and so on, until the program finally completes.
Operating System
• We will learn that while a program executes, a lot of other wild things are going on,
with the primary goal of making the system easy to use.
• Operating system (OS) software makes it easy to run many programs at the same time.
• Allows programs to share memory.
• Enables programs to interact with devices, and other fun stuff like that.
• Makes sure processes operate correctly and efficiently in an easy-to-use manner.
• The primary way is through an overall technique that we call virtualization.
• The OS takes a physical resource, like the processor, or memory, or a disk, and
transforms it into a more general, powerful, and easy-to-use virtual form of itself. This
is why we sometimes refer to the operating system as a virtual machine.
Operating System – cont.
• Allows users to tell the OS what to do and thus make use of the features of the virtual
machine (such as running a program, allocating memory, or accessing a file).
• It provides some interfaces (APIs) that you can call.
• A typical OS exports a few hundred system calls that are available to applications.
• Provides calls to run programs, access memory and devices, and other related actions.
• Provides a standard library to applications.
• The OS as a resource manager allows many programs to run (sharing the CPU), programs to
concurrently access their own instructions and data (sharing memory), and many programs to
access devices (sharing disks and so forth).
• Each piece of hardware (CPU, memory, disk) is a resource of the system; it is the role
of the operating system to manage those resources efficiently, fairly, or with many other
possible goals in mind.
• To understand the role of the OS a little bit better, let's take a look at some examples.
Virtualization of CPU
• First program in Figure 2.1.
• It calls a function, Spin(), that repeatedly checks the time and returns once it has
run for a second.
• Then, it prints out the string that the user passed in on the command line,
and repeats, forever.
• Let’s say we save this file as cpu.c and decide to compile and run it on a
system with a single processor (or CPU as we will sometimes call it). Here is
what we will see:
Figure 2.1: Simple Example: Code That Loops
and Prints (cpu.c)
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <assert.h>
#include "common.h"

int
main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: cpu <string>\n");
        exit(1);
    }
    char *str = argv[1];
    while (1) {
        Spin(1);
        printf("%s\n", str);
    }
    return 0;
}
Running one Program.
prompt> gcc -o cpu cpu.c -Wall
prompt> ./cpu "A"
A
A
A
A
^C
prompt>
Figure 2.2 Running many Programs
prompt> ./cpu A & ./cpu B & ./cpu C & ./cpu D &
[1] 7353
[2] 7354
[3] 7355
[4] 7356
A
B
D
C
A
B
D
C
A
C
B
D
Running many Programs – cont.
• Though we have only one processor, somehow all four of these programs seem to be
running at the same time!
• How does this happen?
• The operating system, with some help from the hardware, is in charge of the system and
makes it seem like there are a very large number of virtual CPUs (an illusion).
• Turning a single CPU (or a small set of them) into a seemingly infinite number of CPUs,
and thus allowing many programs to seemingly run at once, is what we call virtualizing
the CPU.
Running many Programs – cont.
• The capacity to run multiple programs at once brings up questions like:
• Which program should run at a particular time?
• Which program should be served first?
• Policies are used in many different places within an OS to answer these types of
questions.
Virtualization of Memory
• Model of physical memory: an array of bytes.
• To read memory, state an address to access the stored data.
• To write (or update) memory, state the data to be written and the address to write it to.
• Memory is accessed all the time when a program is running.
• A program keeps all of its data structures in memory, accessed through instructions
like loads and stores, or other explicit instructions that access memory in doing their
work.
• Memory is also accessed on each instruction fetch.
• Let's take a look at a program (in Figure 2.3) that allocates some memory by calling
malloc(). The output of this program can be found below:
Figure 2.3: A Single Program that Accesses
Memory (mem.c)
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include "common.h"

int
main(int argc, char *argv[])
{
    int *p = malloc(sizeof(int));                     // a1
    assert(p != NULL);
    printf("(%d) memory address of p: %08x\n",
           getpid(), (unsigned) p);                   // a2
    *p = 0;                                           // a3
    while (1) {
        Spin(1);
        *p = *p + 1;
        printf("(%d) p: %d\n", getpid(), *p);         // a4
    }
    return 0;
}
// a4
Single Program
• Line a1 shows allocation of some memory.
• Line a2 prints out the address of the memory.
• Line a3 puts the number zero into the first slot of the newly allocated memory.
• Line a4 loops, delaying for a second and incrementing the value stored at the address
held in p.
• With every print statement, it also prints out what is called the process identifier
(the PID) of the running program.
• The PID is unique per running process.
Output of Program
prompt> ./mem
(2134) memory address of p: 00200000
(2134) p: 1
(2134) p: 2
(2134) p: 3
(2134) p: 4
(2134) p: 5
^C
Figure 2.4: Running The Memory Program
Multiple Times
prompt> ./mem & ./mem &
[1] 24113
[2] 24114
(24113) memory address of p: 00200000
(24114) memory address of p: 00200000
(24113) p: 1
(24114) p: 1
(24114) p: 2
(24113) p: 2
(24113) p: 3
(24114) p: 3
(24113) p: 4
(24114) p: 4
Multiple times
• Each running program has allocated memory at the same address (00200000),
• yet each seems to be updating the value at 00200000 independently.
• It is as if each running program has its own private memory, instead of sharing the
same physical memory with other running programs.
• Each process accesses its own private virtual address space.
• The OS is virtualizing memory.
Process
Advanced OS
Chapter 4
Objectives
Introduction
The Concepts of A Process
Process API
Process Creation
Process States
Data Structures
Summary
Introduction
• The fundamental concept the OS provides to users: the process.
• A process is defined as a running program.
• A program is a lifeless thing that sits on a disk:
• a group of instructions waiting for action.
• The operating system takes those bytes and gets them running, transforming the program
into something useful.
• It allows users to run more than one program at once; think about your desktop or
laptop, where you run a web browser, mail program, a game, a music player, and so forth.
• A typical system may seem to be running tens or even hundreds of processes at the same
time.
• This makes the system easy to use: one need not be concerned with whether a CPU is
available; one simply runs programs. Hence our challenge:
Challenge
• THE CRUX OF THE PROBLEM:
• HOW TO PROVIDE THE ILLUSION OF MANY CPUS?
• Although there are only a few physical CPUs available, how can the OS provide the
illusion of a nearly-endless supply of said CPUs?
Illusion
• The OS creates this illusion by virtualizing the CPU.
• It uses a technique known as time sharing of the CPU:
• run one process, then stop it and run another, and so forth.
• This allows users to concurrently run as many processes as they would like.
• It promotes the illusion that many virtual CPUs exist when in fact there is only one
physical CPU (or a few).
• The cost of virtualization is performance: each process will run more slowly if the
CPU(s) must be shared.
• The OS uses low-level machinery, called mechanisms, and high-level intelligence, in
the form of policies, to implement virtualization well.
Illusion – cont.
• Mechanisms are low-level methods or protocols that implement a needed piece of
functionality. For example:
• a context switch gives the OS the ability to stop running one program and start
running another on a given CPU;
• modern CPUs employ a time-sharing mechanism.
• Intelligence resides on top of these mechanisms in the OS, in the form of policies.
• Policies are algorithms for making some kind of decision within the OS.
• For example, given a number of possible programs to run on a CPU, which program should
the OS run?
• A scheduling policy in the OS will make this decision, likely using:
• historical information (e.g., which program has run more over the last minute?),
• workload knowledge (e.g., what types of programs are run), and
• performance metrics (e.g., is the system optimizing for interactive performance, or
throughput?) to make its decision.
Time Sharing and Space Sharing
• These are the basic techniques used by an OS to share a resource.
• Time sharing allows a resource (a CPU, or a network link) to be used for a little while
by one entity, and then a little while by another, and so forth; thus the resource can be
shared by many.
• The natural counterpart of time sharing is space sharing, where a resource is divided
(in space) among those who wish to use it. For example, disk space is naturally a
space-shared resource: once a block is assigned to a file, it is not likely to be
assigned to another file until the user deletes it.
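The idea of time sharing can be sketched as a toy round-robin loop. This simulation is purely illustrative (the function name, the one-tick quantum, and the fixed process count are invented here); a real OS switches via interrupts and context switches, not a for loop:

```c
// Toy time-sharing simulation: the loop plays the role of one physical
// CPU, giving each process a one-tick turn until all work is done.
#define NPROC 3

int time_share(int work[NPROC], int trace[], int max_ticks) {
    int ticks = 0, remaining = NPROC;
    while (remaining > 0 && ticks < max_ticks) {
        for (int p = 0; p < NPROC; p++) {
            if (work[p] > 0) {        // process p still has work: run it
                work[p]--;            // one tick of CPU for process p
                trace[ticks++] = p;   // record which process used the CPU
                if (work[p] == 0)
                    remaining--;      // process p is done
            }
        }
    }
    return ticks;                     // total ticks consumed
}
```

With work {2, 1, 1}, the trace alternates 0, 1, 2, 0: each entity gets the resource "for a little while", then hands it on.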
Operating System API
• These APIs are available on any modern operating system:
• Create: An operating system must include some method to create new processes. When a
command is typed into the shell, or you double-click on an application icon, the OS is
invoked to create a new process to run the program you have indicated.
• Destroy: As there is an interface for process creation, systems also provide an
interface to destroy processes forcefully. Some processes will run and just exit by
themselves when complete; when they don't, however, the user may wish to kill them, and
thus an interface to halt a runaway process is quite useful.
• Wait: Sometimes it is useful to wait for a process to stop running; thus some kind of
waiting interface is often provided.
• Miscellaneous Control: Other than killing or waiting for a process, there are sometimes
other controls that are possible. For example, most operating systems provide some kind
of method to suspend a process (stop it from running for a while) and then resume it
(continue it running).
• Status: There are usually interfaces to get some status information about a process as
well, such as how long it has run for, or what state it is in.
Process Creation: A Little More Detail
• One mystery that we should unmask a bit is how programs are transformed into processes.
• Specifically, how does the OS get a program up and running?
• How does process creation actually work?
• The first thing that the OS must do to run a program is to load its code and any static
data (e.g., initialized variables) into memory, into the address space of the process.
• Programs initially reside on disk (or, in some modern systems, flash-based SSDs) in
some kind of executable format; thus, the process of loading a program's code and static
data into memory requires the OS to read those bytes from disk and place them in memory
somewhere (as shown in Figure 4.1).
• In early (or simple) operating systems, the loading process is done eagerly, i.e., all
at once before running the program; modern OSes perform the process lazily, i.e., by
loading pieces of code or data only as they are needed during program execution.
• Remember that before running anything, the OS clearly must do some work to get the
important program bits from disk into memory.
The Abstraction: A Process
• We call the abstraction provided by the OS of a running program a process.
• A process is simply a running program.
• What constitutes a process?
• We can summarize a process by the different pieces of the system it accesses or affects
as it runs.
• To understand what constitutes a process, we have to understand its machine state: what
a program can read or update when it is running.
• At any given time, what parts of the machine are important to the execution of this
program?
• Memory is one obvious component of the machine state of a process. The instructions of
the running program, and the data it reads and writes, sit in memory. Thus the memory
that the process can address (called its address space) is part of the process.
The Abstraction: A Process – cont.
• Also part of the process's machine state are registers; many instructions explicitly
read or update registers, and thus clearly they are important to the execution of the
process.
• Note that there are some particularly special registers that form part of this machine
state. For example, the program counter (PC) (sometimes called the instruction pointer
or IP) tells us which instruction of the program is currently being executed;
• similarly, a stack pointer and associated frame pointer are used to manage the stack
for function parameters, local variables, and return addresses.
• Finally, programs often access persistent storage devices too. Such I/O information
might include a list of the files the process currently has open.
Figure 4.1: Loading from Program to Process
[Diagram: the program's code and static data reside on disk; loading reads them from the
on-disk program and places them into the process's address space in memory, which also
holds the heap and the stack, with the CPU executing the process.]
Process Creation – cont.
• The OS performs other initialization tasks,
• mainly related to input/output (I/O).
• Each process by default has three open file descriptors on UNIX systems:
• standard input,
• standard output,
• and standard error;
• these descriptors let programs easily read input from the terminal as well as print
output to the screen.
• Having loaded the code and static data into memory, created and initialized a stack,
and done other work related to I/O setup, the OS has (finally) set the stage for program
execution. It thus has one last task: to start the program running at the entry point,
namely main(). By jumping to the main() routine (using a specialized mechanism to be
discussed in Chapter 5),
• the OS transfers control of the CPU to the newly-created process, and thus the program
begins its execution.
Process State
• Now that we have some idea of what a process is (though we will continue to refine this notion),
and (roughly) how it is created, let us talk about the different states a process can be in at a given
time.
• A process can be in one of three states:
• Running: In the running state, a process is running on a processor. This means it is executing
instructions.
• Ready: In the ready state, a process is ready to run but for some reason the OS has chosen not to
run it at this given moment.
• Blocked: In the blocked state, a process has performed some kind of operation that makes it not
ready to run until some other event takes place. A common example: when a process initiates an
I/O request to a disk, it becomes blocked and thus some other process can use the processor.
Process State Transition
[Diagram: Ready --(Scheduled)--> Running; Running --(Descheduled)--> Ready;
Running --(I/O: initiate)--> Blocked; Blocked --(I/O: done)--> Ready.]
Process State Transition – cont.
• In the diagram, a process can be moved between the ready and running states at the discretion of
the OS.
• Being moved from ready to running means the process has been scheduled
• Being moved from running to ready means the process has been descheduled.
• Once a process has become blocked (e.g., by initiating an I/O operation), the OS will keep it as
such until some event occurs (e.g., I/O completion); at that point, the process moves to the ready
state again (and potentially immediately to running again, if the OS so decides).
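The legal moves in the state diagram can be written down as a small transition function. This is an illustrative sketch; the enum and function names are assumptions, not from the chapter:

```c
// Process states and the events that move between them, following the
// chapter's diagram. Names here are illustrative.
typedef enum { RUNNING, READY, BLOCKED } pstate_t;
typedef enum { SCHEDULE, DESCHEDULE, IO_INITIATE, IO_DONE } pevent_t;

// Return the next state for (state, event), or -1 if the diagram has
// no such edge (e.g., a READY process cannot initiate I/O).
int transition(pstate_t s, pevent_t e) {
    switch (e) {
    case SCHEDULE:    return s == READY   ? RUNNING : -1;
    case DESCHEDULE:  return s == RUNNING ? READY   : -1;
    case IO_INITIATE: return s == RUNNING ? BLOCKED : -1;
    case IO_DONE:     return s == BLOCKED ? READY   : -1;
    }
    return -1;
}
```

Note the asymmetry the questions below probe: scheduling moves Ready to Running; descheduling moves Running to Ready; only a Running process can block on I/O.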
• Let's look at an example of how two processes might transition through some of these
states. First, imagine two processes running, each of which only uses the CPU (they do no
I/O). In this case, a trace of the state of each process might look like Figure 4.3.
Figure 4.3: Tracing Process State: CPU Only
Time    Process0    Process1    Notes
 1      Running     Ready
 2      Running     Ready
 3      Running     Ready
 4      Running     Ready       Process0 now done
 5      -           Running
 6      -           Running
 7      -           Running
 8      -           Running     Process1 now done
Tracing Process State: CPU and I/O
• In this next example, the first process issues an I/O after running for some time.
• At that point, the process is blocked, giving the other process a chance to run.
Figure 4.4 shows a trace of this scenario.
• More specifically, Process0 initiates an I/O and becomes blocked waiting for it to
complete; processes become blocked, for example, when reading from a disk or
waiting for a packet from a network
• The OS recognizes Process0 is not using the CPU and starts running Process1.
While Process1 is running, the I/O completes, moving Process0 back to ready.
Finally, Process1 finishes, and Process0 runs and then is done.
Figure 4.4: Tracing Process State: CPU and I/O
Time    Process0    Process1    Notes
 1      Running     Ready
 2      Running     Ready
 3      Running     Ready       Process0 initiates I/O
 4      Blocked     Running     Process0 is blocked,
 5      Blocked     Running     so Process1 runs
 6      Blocked     Running
 7      Ready       Running     I/O done
 8      Ready       Running     Process1 now done
 9      Running     -
10      Running     -           Process0 now done
Data Structures
• The OS is a program, and like any program, it has some key data structures that track
various relevant pieces of information.
• To track the state of each process, for example, the OS likely will keep some kind of
process list for all processes that are ready, as well as some additional information to
track which process is currently running.
• The OS must also track, in some way, blocked processes; when an I/O event completes,
the OS should make sure to wake the correct process and ready it to run again.
• When a process is stopped, its registers will be saved to a memory location (its
register context); by restoring these registers (i.e., placing their values back into the
actual physical registers), the OS can resume running the process.
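What such a process-list entry might contain can be sketched as a struct (a hypothetical process control block; the field names and the x86-style register set are assumptions here, and real kernels differ):

```c
// Sketch of a per-process entry the OS might keep on its process list.
struct context {
    int eip, esp;                      // program counter, stack pointer
    int ebx, ecx, edx, esi, edi, ebp;  // general-purpose registers
};

enum proc_state { READY_S, RUNNING_S, BLOCKED_S, DONE_S };

struct proc {
    int pid;                 // unique process identifier
    enum proc_state state;   // where the process is in its lifetime
    char *mem;               // start of the process's address space
    struct context context;  // registers saved when the process stops
};
```

Saving and restoring the `context` field is exactly the mechanism that lets the OS stop a process and later resume it where it left off.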
Process API Introduction
• The process interlude covers practical aspects of systems:
• we discuss process creation in UNIX systems.
• UNIX uses a pair of system calls, fork() and exec(), to create a new process.
• A third routine, wait(), can be used by a process wishing to wait for a process it has
created to complete.
• We now present these interfaces in more detail, with a few simple examples.
The Fork() System Call
• The fork() system call is used to create a new process.
• However, it is certainly the strangest routine you will ever call.
• Specifically, you have a running program whose code looks like what you see in
Figure 5.1; examine the code, or better yet, type it in and run it yourself!
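Figure 5.1 (p1.c) is referenced but not reproduced in these notes. The following is a sketch consistent with the output traces below; the function name fork_demo is mine, and the child exits inside the function so the sketch composes with a caller, where Figure 5.1 simply returns from main():

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// Sketch of the Figure 5.1 fork() example: print a greeting, fork, and
// have parent and child each identify themselves. Which of the last two
// lines prints first is up to the CPU scheduler (nondeterministic).
int fork_demo(void) {
    printf("hello world (pid:%d)\n", (int) getpid());
    int rc = fork();
    if (rc < 0) {                 // fork failed
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) {         // child: fork() returned 0
        printf("hello, I am child (pid:%d)\n", (int) getpid());
        exit(0);                  // child ends here in this sketch
    } else {                      // parent: fork() returned child's pid
        printf("hello, I am parent of %d (pid:%d)\n",
               rc, (int) getpid());
    }
    return rc;                    // parent returns the child's pid
}
```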
Calling fork() –cont.
• Notice that the output is not deterministic.
• With the child process created, we now have two active processes in the system: the
parent and the child.
• Assuming we are running on a single CPU, either the child or the parent might execute
at any point. In the example above, the parent ran first and thus printed its message
first. But the opposite might happen, as we show in this output trace:
prompt> ./p1
hello world (pid:4426)
hello, I am child (pid:4427)
hello, I am parent of 4427 (pid:4426)
prompt>
• The CPU scheduler determines which process runs at a given moment in time.
• We cannot make assumptions about which one it will choose, and hence which process will
run first.
• This nondeterminism, as it turns out, leads to some interesting problems, particularly
in multi-threaded programs.
Exec()
• A final and important piece of the process creation API is the exec() system call.
• This system call is useful when you want to run a program that is different from the
calling program. Calling fork() in p2.c is only useful if you want to keep running copies
of the same program; exec() lets you run a different program (Figure 5.3).
• In the example, the child process calls execvp() in order to run the program wc, the
word counting program. In fact, it runs wc on the source file p3.c, thus telling us how
many lines, words, and bytes are found in the file:
prompt> ./p3
hello world (pid:29383)
hello, I am child (pid:29384)
      29     107    1030 p3.c
hello, I am parent of 29384 (wc:29384) (pid:29383)
prompt>
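Figure 5.3 (p3.c) is likewise not reproduced here. The following is a sketch of the fork() + execvp() + wait() pattern it uses; the function name exec_demo and its path parameter are mine, added so the file to count can be chosen by the caller:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

// Sketch of the Figure 5.3 pattern: the child replaces itself with the
// word-count program wc via execvp(); the parent waits for it to finish.
int exec_demo(const char *path) {
    printf("hello world (pid:%d)\n", (int) getpid());
    int rc = fork();
    if (rc < 0) {                      // fork failed
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) {              // child: run "wc <path>"
        printf("hello, I am child (pid:%d)\n", (int) getpid());
        char *myargs[3];
        myargs[0] = strdup("wc");      // program to run: word count
        myargs[1] = strdup(path);      // argument: file to count
        myargs[2] = NULL;              // marks end of the array
        execvp(myargs[0], myargs);     // on success, never returns
        fprintf(stderr, "exec failed\n");
        exit(1);
    } else {                           // parent: wait for the child
        int rc_wait = wait(NULL);
        printf("hello, I am parent of %d (rc_wait:%d) (pid:%d)\n",
               rc, rc_wait, (int) getpid());
    }
    return rc;
}
```

A successful execvp() never returns: the child's code and static data are overwritten with those of wc, which is why the "exec failed" line only prints when something goes wrong.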
Summary
• We introduced the most basic abstraction of the OS: the process. It is quite simply
viewed as a running program.
• We examined the low-level mechanisms needed to implement processes,
• and the higher-level policies required to schedule them in an intelligent way.
• By combining mechanisms and policies, we will build up our understanding of how an
operating system virtualizes the CPU.