
CSC 246-001 Homework Assignment 4
Submission Deadline: Friday, April 8, 11:59 pm
This is a longer homework worth a total of 120 points. Please start early!

As always, be sure your code compiles and runs on the EOS Linux machines before you submit. Also, remember that your programs need to be commented and consistently indented, and that you must document your sources and write at least half your code yourself from scratch.

Problem 1: Short Answer Questions (20 pts, submit "prob_1.txt")
1. List and explain the four necessary conditions for deadlock
2. Explain the swapping concept and why we need it
3. Explain internal and external fragmentation
4. Explain the hit ratio (for the TLB)
5. What is an inverted page table?

Problem 2: Deadlock Detection (20 pts, submit prob_2.txt, prob_2.pdf)
Suppose there are three processes P1, P2, and P3, and four types of resources R1, R2, R3, and R4. R1 and R2 each have only one instance, R3 has two instances, and R4 has four instances. Consider the following resource allocation/request situation:
P1: allocated {R1}, requests {R3}
P2: allocated {R2, R3}, requests {R1, R4}
P3: allocated {R3, R4}, requests {R2}
(a) Draw the resource allocation graph for the three running processes. (10 pts, submit prob_2.pdf)

(b) Is there a deadlock in the system? If so, please give the cycle of waiting processes. If not, give a possible sequence of processes’ completion. (10 pts, submit prob_2.txt)
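If you want to check your reasoning for part (b), here is a minimal C sketch of the standard deadlock-detection algorithm for resources with multiple instances (the Work/Finish approach); the available, allocation, and request arrays are left as placeholders for you to fill in from the problem statement, so the sketch does not give the answer.

/* A minimal sketch of the deadlock-detection algorithm for resources with
   multiple instances; the matrices below are placeholders, not the answer. */
#include <stdio.h>
#include <stdbool.h>

#define NPROC 3   /* P1 .. P3 */
#define NRES 4    /* R1 .. R4 */

int main( void )
{
  /* Fill these in from the problem statement; zeros are placeholders. */
  int available[ NRES ] = { 0, 0, 0, 0 };          /* instances not yet allocated */
  int allocation[ NPROC ][ NRES ] = { { 0 } };     /* allocation[i][j]: instances of Rj+1 held by Pi+1 */
  int request[ NPROC ][ NRES ] = { { 0 } };        /* outstanding requests */

  int work[ NRES ];
  bool finish[ NPROC ] = { false };
  for ( int j = 0; j < NRES; j++ )
    work[ j ] = available[ j ];

  /* Repeatedly look for a process whose outstanding request can be met. */
  bool progress = true;
  while ( progress ) {
    progress = false;
    for ( int i = 0; i < NPROC; i++ ) {
      if ( finish[ i ] )
        continue;
      bool canRun = true;
      for ( int j = 0; j < NRES; j++ )
        if ( request[ i ][ j ] > work[ j ] )
          canRun = false;
      if ( canRun ) {
        /* This process can finish; it releases everything it holds. */
        for ( int j = 0; j < NRES; j++ )
          work[ j ] += allocation[ i ][ j ];
        finish[ i ] = true;
        progress = true;
        printf( "P%d can complete\n", i + 1 );
      }
    }
  }

  /* Any process that could never finish is part of a deadlock. */
  for ( int i = 0; i < NPROC; i++ )
    if ( !finish[ i ] )
      printf( "P%d is deadlocked\n", i + 1 );
  return 0;
}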

Problem 3: 50-Percent Rule (25 pts + 10 pts bonus, submit "prob_3.c", "prob_3.pdf" and "prob_3.txt")
In a running program, a function like malloc() performs the job of dynamic memory allocation. This function has to carve up the big block of memory that the OS has provided into smaller blocks as your program needs them. If malloc() runs out of memory, it asks the OS to extend the big block of memory used for your program's heap. In handling memory requests inside a process, malloc() has to deal with the same kinds of fragmentation concerns that the OS does. For this problem, you get to investigate dynamic memory allocation in C programs and report on how well the 50-percent rule holds. I'm giving you a skeleton prob_3.c program to help get you started.

You see that this program has an array for holding 10,000 pointers to dynamically allocated blocks of memory. It also has an array to store the size of each block. Add code to this skeleton so that it does the following:
1. Uses malloc() to dynamically allocate a block of memory for every element of ptrList. Dynamically allocated blocks should have sizes that are selected (uniformly) randomly in the range 1 to 10,000 bytes (inclusive). As you choose block sizes, store them in the sizeList array. You'll need these sizes later.
2. After the initial block allocation, your program will repeatedly free blocks and allocate new ones. Run a loop for 100,000 iterations. On each iteration, randomly choose an index in ptrList, free the block it points to and then replace it with a pointer to a new, dynamically allocated block with a random size (again, in the range 1 to 10,000 bytes).
3. Initially, and after every 100 iterations, print out a line like the ones in the sample output below. In this report, we're trying to see how much memory is wasted due to fragmentation. For each report, print the iteration count, an approximation of the heap size, the total size of all blocks pointed to by ptrList, and then the fraction of the heap size that's being used by the blocks currently allocated. We'll approximate the heap size by looking through the list of blocks in ptrList and examining the range of memory addresses. Find the lowest and highest memory addresses contained in any of the blocks. Use the difference between these addresses as a measure of the heap size.
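To make the steps above concrete, here is a minimal sketch of one possible structure for the program. The names LIST_SIZE, MAX_BLOCK, ITERATIONS, randomSize(), and report() are assumptions for illustration and may not match the provided skeleton; ptrList and sizeList are the arrays described above.

/* A sketch of the experiment described above; the real prob_3.c skeleton
   may declare its arrays and helpers differently. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LIST_SIZE 10000      /* number of live blocks */
#define ITERATIONS 100000    /* free-and-reallocate iterations */
#define MAX_BLOCK 10000      /* block sizes range from 1 to 10,000 bytes */

static char *ptrList[ LIST_SIZE ];
static int sizeList[ LIST_SIZE ];

/* Uniformly random block size in the range 1 .. MAX_BLOCK, inclusive. */
static int randomSize( void )
{
  return rand() % MAX_BLOCK + 1;
}

/* Print one report line: iteration count, approximate heap size, total
   block size and the fraction of the heap that the blocks occupy. */
static void report( int iter )
{
  char *lo = ptrList[ 0 ], *hi = ptrList[ 0 ] + sizeList[ 0 ];
  long total = 0;
  for ( int i = 0; i < LIST_SIZE; i++ ) {
    if ( ptrList[ i ] < lo )
      lo = ptrList[ i ];
    if ( ptrList[ i ] + sizeList[ i ] > hi )
      hi = ptrList[ i ] + sizeList[ i ];
    total += sizeList[ i ];
  }
  long heap = hi - lo;   /* address-range approximation of the heap size */
  printf( "%d %ld %ld %f\n", iter, heap, total, (double) total / heap );
}

int main( void )
{
  srand( time( NULL ) );

  /* Step 1: fill ptrList with randomly sized blocks. */
  for ( int i = 0; i < LIST_SIZE; i++ ) {
    sizeList[ i ] = randomSize();
    ptrList[ i ] = malloc( sizeList[ i ] );
  }

  /* Steps 2 and 3: report initially, then replace a random block on each
     iteration and report again after every 100 iterations. */
  report( 0 );
  for ( int iter = 1; iter <= ITERATIONS; iter++ ) {
    int idx = rand() % LIST_SIZE;
    free( ptrList[ idx ] );
    sizeList[ idx ] = randomSize();
    ptrList[ idx ] = malloc( sizeList[ idx ] );
    if ( iter % 100 == 0 )
      report( iter );
  }

  return 0;
}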

Sample Output
Your program output should be formatted like the following. Of course, the particular values you get will probably be different from the ones here.
0 98606510 49833184 0.505374
100 98783783 49812373 0.504257
200 98841800 49817149 0.504009
300 98924453 49809878 0.503514
400 98972970 49751727 0.502680
500 98972970 49721901 0.502379
600 98972970 49714174 0.502301
700 99088195 49717314 0.501748
800 99122525 49699444 0.501394
...

Results Plot
Run your program on one of the EOS Linux machines, and write its output to a file. Download the file and generate a plot (prob_3.pdf) of the fraction of heap memory used by dynamically allocated blocks (the Y axis) against the iteration count (the X axis). I used Microsoft Excel to generate the following plot, but you can use other tools if you're more familiar with them. You just need to turn in a PDF of a plot formatted like the following:

[Example plot: fraction of heap memory used by allocated blocks (Y axis) vs. iteration count (X axis)]

Explanation
Based on your results, does the memory allocation behavior of your program match the 50-percent rule? Give your answer and a brief explanation in the file "prob_3.txt". Also include a copy of the last line of output from your program.

How I Made My Plot
Here's what I did to make a PDF of a plot like the one above. If you want to make your plot this way, the following steps or some simple variation of them will probably work for you.
1. Ran my program and wrote its output to a file: ./prob_3 > output.txt
2. Downloaded the output.txt file to my desktop machine.
3. Opened the output file in Microsoft Excel. It walked me through a series of dialogs for importing the text file as a spreadsheet. The only thing I had to do was choose space as a delimiter.
4. Selected the D column and then chose a line graph from the charts tab. This gave me a plot of the fraction of used heap space against line number.
5. To change the X axis for the chart, I chose "Select Data" from the right-click menu on the chart. Then I typed "=output.txt!$A:$A" into the X axis field. (There's a graphical way to get this done also.)
6. Now that my chart looked the way I wanted, I right-clicked on it again and chose "Save as Picture". This let me write it out as a PDF.

Extra Credit
Do you think more uniform block sizes will increase or decrease fragmentation? Let's see. For 10 points of extra credit, change your program so that dynamically allocated block sizes range from 9,001 to 10,000 bytes in size. Generate new results and describe them in the prob_3.txt file. Include a copy of the last line of output from your modified program and give a reasonable explanation for any change in your program's output.
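If your program picks block sizes through a single helper (like the hypothetical randomSize() in the sketch above), the extra-credit variant is a one-line change:

/* Extra-credit variant: more uniform block sizes of 9,001 .. 10,000 bytes. */
static int randomSize( void )
{
  return rand() % 1000 + 9001;
}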

Problem 4: Paged Memory Simulation (25 pts, submit "prob_4.txt")
Consider virtual memory support through paging with a TLB. The virtual address consists of 18 bits (a 256KB virtual address space). The physical memory size is 64KB and the page/frame size is 4KB. Using the page table and TLB shown below, for each of the following virtual memory references please 1) give the calculated physical address, 2) indicate any changes to the page table and TLB, and 3) give the effect of the memory reference (page fault *and* completion, or access violation). Also indicate when a TLB miss occurs; you should use FIFO replacement as your TLB replacement policy (starting with the first entry).

[Initial page table and TLB contents not reproduced here]

(a) 000010000100000101 (read access)
(b) 000011000011000100 (write access)
(c) 000001000101100100 (write access)
(d) 000010001100000011 (write access)
(e) 000000000101010100 (read access)

Page numbers are encoded in the most significant bits. The status bits (valid, dirty, read only, use) are 1 if true, 0 otherwise. If a page fault occurs, the corresponding page will be loaded into memory with write access (at the indicated frame number).

Hints:
dirty bit: after a frame is modified, its dirty bit is marked as "1"
used bit: after a frame is most recently accessed, its used bit is marked as "1"
For reference, your answer may take the following form:

000100001100000000 ( read access )

--> 000100 001100000000
page number is 4, which is mapped to frame 3 in the physical memory
TLB miss
Physical address = 000011 001100000000
paging completion
TLB replacement occurs: replace entry "virtual page 2" with the new entry "virtual page 4"
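To double-check the bit arithmetic in answers like the one above, here is a small C sketch (not part of what you submit) that splits an 18-bit virtual address into its 6-bit page number and 12-bit offset and rebuilds the physical address; the frame number is hard-coded from the worked example above rather than looked up in the handout's page table.

/* Sketch of the address arithmetic only; a real translation would consult
   the page table and TLB from the handout instead of a hard-coded frame. */
#include <stdio.h>

#define OFFSET_BITS 12                     /* 4KB pages -> 12-bit offset */
#define OFFSET_MASK ((1 << OFFSET_BITS) - 1)

int main( void )
{
  unsigned int va = 0x4300;                /* 000100 001100000000 from the example above */

  unsigned int page = va >> OFFSET_BITS;   /* top 6 bits: virtual page 4 */
  unsigned int offset = va & OFFSET_MASK;  /* low 12 bits: offset 0x300 */

  unsigned int frame = 3;                  /* the example maps page 4 to frame 3 */
  unsigned int pa = ( frame << OFFSET_BITS ) | offset;   /* 16-bit physical address */

  printf( "page %u, offset 0x%03x, physical address 0x%04x\n", page, offset, pa );
  return 0;
}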

Problem 5: Virtual Memory (30 pts, submit "prob_5.c")
For this problem, you get to complete the implementation of a program that simulates a virtual memory system. As your program processes memory access requests, it will keep up with which pages are resident in physical memory and which parts of the page table are cached in a Translation Lookaside Buffer (TLB). Your program will read and process a trace of memory references, keeping up with how many page faults occur and how many TLB misses occur. It takes as command-line parameters the number of entries the TLB can store, the total number of available memory frames, and the name of a memory reference trace file containing the trace of memory accesses made by a collection of processes. Your program will implement an LRU policy for both TLB replacement and page replacement.
You can think of the memory reference trace as a record of the sequence of references made by multiple processes. Lines of the file indicate either a page reference or a context switch between processes. The format of the line indicates which:
●integer — page reference
This indicates that the current process makes a reference to this page number in its address space. This is just a page number written in decimal, not a whole logical address.
●char char — context switch (space between 2 characters)
This indicates that the scheduler context switches to a different process. The second character is the one-letter name of the process that starts (or resumes) execution. The first character is either ‘s’ or ‘x’. An ‘s’ indicates a context switch to a different process. The letter ‘x’ indicates that the previously running process has terminated.
The following is an example of a trace file:

s A
5
3
s B
11
5
s A
1
6
x B
12
8
20
13

Every valid trace file begins with a context switch that sets the current process. In the trace above, process A makes 4 references to pages 5, 3, 1, and 6. Process B makes a total of 6 references to pages 11, 5, 12, 8, 20, and 13. There are three context switches: after A's second reference, after B's second reference, and after A terminates (issues its last reference).
We're providing a longer sample trace file (data.txt) to help you develop and test your program. To help you get started with the implementation, we're providing a partial implementation of prob_5.c that parses the trace files and gets you started with the data structures you'll need. It contains a partial implementation of a table that records the simulated contents of the TLB and another table that records the set of pages that are resident in physical memory. To complete the implementation, you will need to add some code to main(). You will also need to implement two functions for each of the tables maintained by the simulator:
●tlb_access() : update the record of what is in the TLB in response to a memory access made by a process
●resident_access() : update the record of what pages are resident in memory in response to a memory access made by a process
●tlb_flush() : discard the contents of the TLB. This is needed whenever a context switch occurs, since each process typically has its own page table for address resolution. Page table elements cached on behalf of one process won't be correct for a different process.
●resident_flush() : discard the contents of memory frames that were used by a process that has just terminated.
The partial implementation of prob_5.c already reads the command-line parameters for you and initializes the TLB and resident page data structures. You need to add code to output the total number of TLB misses, page faults, and memory references once the simulation is complete.
A sample execution of the program might look like the following:
[xxx@xxx] ./prob_5 12 34 data.txt
TLB misses = 56, page faults = 78, references = 90
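As one possible starting point for the two tlb_ functions listed above (the provided skeleton may organize its data differently, and these signatures are assumptions), here is a minimal sketch of an LRU-managed TLB table. It uses page number 0 to mark an empty entry, as the notes below allow, and the small main() is only there to exercise the functions.

/* Minimal LRU TLB sketch; the real prob_5.c skeleton may differ. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_TLB 256                 /* assumed upper bound on TLB entries */

static int tlbSize;                 /* capacity, from the command line */
static int tlbPage[ MAX_TLB ];      /* page cached in each entry (0 = empty) */
static long tlbUsed[ MAX_TLB ];     /* time of last use, for LRU */
static long tlbClock;               /* increases on every access */

/* Record an access to the given page; return true on a TLB miss. */
bool tlb_access( int page )
{
  int victim = 0;
  for ( int i = 0; i < tlbSize; i++ ) {
    if ( tlbPage[ i ] == page ) {          /* hit: refresh its LRU stamp */
      tlbUsed[ i ] = ++tlbClock;
      return false;
    }
    if ( tlbUsed[ i ] < tlbUsed[ victim ] )
      victim = i;                          /* remember the least recently used */
  }
  tlbPage[ victim ] = page;                /* miss: replace the LRU entry */
  tlbUsed[ victim ] = ++tlbClock;
  return true;
}

/* Discard everything in the TLB, e.g. on a context switch. */
void tlb_flush( void )
{
  for ( int i = 0; i < tlbSize; i++ ) {
    tlbPage[ i ] = 0;
    tlbUsed[ i ] = 0;
  }
}

int main( void )
{
  tlbSize = 2;                             /* tiny TLB just to exercise the code */
  int trace[] = { 5, 3, 5, 1, 3 };
  int misses = 0;
  for ( int i = 0; i < 5; i++ )
    if ( tlb_access( trace[ i ] ) )
      misses++;
  printf( "TLB misses = %d\n", misses );   /* 4: pages 5 and 3 miss, 5 hits, 1 and 3 miss */
  return 0;
}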
Notes:
●We're assuming that all memory frames and TLB entries are initially empty. As programs run, their memory references will bring needed pages into memory.
●To simplify your implementations, you are guaranteed that page zero in a process' logical address space is never referenced. If you want to use page number zero to indicate that a TLB entry or a memory frame is empty, that will work fine for our test data. Of course, this isn't generally true for real processes, but it's fine for this simulation.
●Note that you don't actually need to know the contents of any process' page table for this program. Just assume that each process has its own page table and that no pages are shared by processes. Thus, page 5 in process A will never be the same page as page 5 in process B. This isn't generally true, but it does simplify our simulator. (One possible shape for a resident-page table keyed by process and page is sketched after these notes.)
●You're implementing a global page replacement algorithm. When some process page faults, the victim can be any page of any process.
●We're assuming that the TLB must be flushed on context switches because it only holds the virtual to physical mapping, which becomes incorrect when the OS context switches to a new process. It is possible to implement a TLB that works differently, but that's not what we're doing for this assignment.
●Assume that all of a process' allocated pages are freed when it terminates.
●Sample output (sample_output.txt) for the provided input file is included.
●Another sample output (sample_output2.txt) is also included.
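Finally, here is a minimal sketch of a resident-page table that follows the notes above: entries are identified by a (process, page) pair, replacement is global LRU, and all of a process' frames are freed when it terminates. The array names, the MAX_FRAMES bound, and these function signatures are assumptions for illustration, not the provided skeleton's.

/* Sketch of a globally LRU-replaced resident-page table; the real
   prob_5.c skeleton may organize this differently. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_FRAMES 1024                      /* assumed upper bound on frames */

static int frameCount;                       /* capacity, from the command line */
static char residentProc[ MAX_FRAMES ];      /* owning process ('\0' = empty frame) */
static int residentPage[ MAX_FRAMES ];       /* page held in the frame */
static long residentUsed[ MAX_FRAMES ];      /* time of last use, for global LRU */
static long memClock;

/* Record an access by process proc to page; return true on a page fault. */
bool resident_access( char proc, int page )
{
  int victim = 0;
  for ( int i = 0; i < frameCount; i++ ) {
    if ( residentProc[ i ] == proc && residentPage[ i ] == page ) {
      residentUsed[ i ] = ++memClock;        /* hit: the page is resident */
      return false;
    }
    if ( residentUsed[ i ] < residentUsed[ victim ] )
      victim = i;                            /* any process's page may be evicted */
  }
  residentProc[ victim ] = proc;             /* fault: load into the LRU frame */
  residentPage[ victim ] = page;
  residentUsed[ victim ] = ++memClock;
  return true;
}

/* Free every frame owned by a process that has just terminated. */
void resident_flush( char proc )
{
  for ( int i = 0; i < frameCount; i++ )
    if ( residentProc[ i ] == proc ) {
      residentProc[ i ] = '\0';
      residentPage[ i ] = 0;
      residentUsed[ i ] = 0;
    }
}

int main( void )
{
  frameCount = 2;                            /* tiny memory, just to exercise it */
  int faults = 0;
  faults += resident_access( 'A', 5 );       /* fault */
  faults += resident_access( 'B', 5 );       /* fault: B's page 5 is a different page */
  faults += resident_access( 'B', 7 );       /* fault: evicts A's page 5 (global LRU) */
  faults += resident_access( 'A', 5 );       /* fault again */
  resident_flush( 'A' );                     /* A terminates; its frames are freed */
  printf( "page faults = %d\n", faults );    /* prints 4 */
  return 0;
}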
