COMP 3511: Lecture 17

Date: 2024-10-31 15:06:38

Reviewed:

Topic / Chapter:

summary

❓Questions

Notes

Algorithms (cont.)
  • Resource-request algorithm for process P_i

    • Request_i: request vector for process P_i
      • Request_i[j] = k: P_i wants k instances of resource type R_j
    • steps
      1. if Request_i ≤ Need_i: go to step 2
        • otherwise: error, process has exceeded its maximum claim
        • verify validity / legitimacy
      2. if Request_i ≤ Available: go to step 3
        • otherwise: P_i must wait, resources not yet available
        • verify availability
      3. pretend to allocate: Available = Available − Request_i; Allocation_i = Allocation_i + Request_i; Need_i = Need_i − Request_i
        • then run the safety algorithm; if the resulting state is unsafe, roll back and make P_i wait
        • verify safety
      • basic idea:
        • check validity
        • verify availability
        • execute if safe
        • deallocate resources upon termination
    • 👨‍🏫
      • safe sequence: may not be unique
  • Examples
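The resource-request checks above can be sketched in C. This is a minimal illustration, not the lecture's own code; `NPROC`, `NRES`, and the global tables are hypothetical names, and the safety-algorithm call after step 3 is left to the caller:

```c
/* Sketch of the Banker's resource-request algorithm.
 * NPROC, NRES, and the global tables are illustrative. */
#include <stdbool.h>

#define NPROC 3
#define NRES  2

int available[NRES];
int need[NPROC][NRES];
int allocation[NPROC][NRES];

/* Returns true if the request by process i passes the validity and
 * availability checks and has been tentatively allocated; the safety
 * check (and rollback if unsafe) is a separate step. */
bool request_resources(int i, const int request[NRES]) {
    for (int j = 0; j < NRES; j++)            /* step 1: Request_i <= Need_i */
        if (request[j] > need[i][j])
            return false;                     /* exceeded maximum claim */
    for (int j = 0; j < NRES; j++)            /* step 2: Request_i <= Available */
        if (request[j] > available[j])
            return false;                     /* P_i must wait */
    for (int j = 0; j < NRES; j++) {          /* step 3: pretend to allocate */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    return true;  /* caller then runs the safety algorithm */
}
```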

Deadlock Detection
  • Deadlock detection

    • deadlock prevention / avoidance: expensive!
    • thus system may provide:
      • algorithm examining state of system
      • to determine: whether a deadlock has occurred
      • algorithm to recover from deadlock
  • Single instance of each type

    • maintain: wait-for graph
      • nodes being processes
      • edge P_i → P_j: P_i is waiting for P_j
    • periodically invoke algorithm to detect a cycle
      • as cycle iff deadlock, for this case
      • detecting a cycle: runs in O(n²) time
        • n: no. of vertices
    • wait-for graph: not applicable for system with multiple instances of same type
    • 01_rag_and_wait_for
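Cycle detection on the wait-for graph can be sketched as a DFS over an adjacency matrix. This is a minimal sketch, assuming an illustrative process count `N` and matrix name `wait_for` (not from the lecture):

```c
/* Detect a cycle in a wait-for graph stored as an adjacency matrix
 * (wait_for[i][j] != 0 means P_i waits for P_j). With a single
 * instance of each resource type, a cycle means deadlock. */
#include <stdbool.h>

#define N 4                         /* number of processes (illustrative) */
int wait_for[N][N];

static bool dfs(int u, int state[N]) {  /* 0=unvisited, 1=on stack, 2=done */
    state[u] = 1;
    for (int v = 0; v < N; v++)
        if (wait_for[u][v]) {
            if (state[v] == 1) return true;          /* back edge: cycle */
            if (state[v] == 0 && dfs(v, state)) return true;
        }
    state[u] = 2;
    return false;
}

bool has_deadlock(void) {
    int state[N] = {0};
    for (int u = 0; u < N; u++)
        if (state[u] == 0 && dfs(u, state)) return true;
    return false;
}
```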
  • Several instances for a resource type

  • Detection algorithm

    • steps
      1. initialize Work = Available; Finish[i] = true only for each P_i w/ zero allocation
      2. find an i s.t. Finish[i] == false and Request_i ≤ Work
        • if no such i exists: go to step 4
      3. Work = Work + Allocation_i; Finish[i] = true; go to step 2
      4. if Finish[i] == false for some i: system is deadlocked (and P_i is deadlocked)
    • algorithm: requires O(m × n²) operations to detect whether system is in deadlocked state
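The steps above can be sketched in C. A minimal illustration, assuming hypothetical dimensions `NP`/`NR` and global tables; here `request` holds outstanding requests, not maximum claims:

```c
/* Sketch of the deadlock-detection algorithm for multiple instances
 * per resource type (the O(m*n^2) algorithm). Names are illustrative. */
#include <stdbool.h>

#define NP 3   /* processes */
#define NR 2   /* resource types */

int available[NR];
int allocation[NP][NR];
int request[NP][NR];      /* outstanding requests */

/* Returns true if the system is deadlocked; finish[i]==false marks
 * the processes that are stuck. */
bool detect_deadlock(bool finish[NP]) {
    int work[NR];
    for (int j = 0; j < NR; j++) work[j] = available[j];
    for (int i = 0; i < NP; i++) {            /* Finish[i] iff Allocation_i == 0 */
        bool none = true;
        for (int j = 0; j < NR; j++) if (allocation[i][j]) none = false;
        finish[i] = none;
    }
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool ok = true;                   /* Request_i <= Work ? */
            for (int j = 0; j < NR; j++) if (request[i][j] > work[j]) ok = false;
            if (ok) {                         /* assume P_i runs and releases */
                for (int j = 0; j < NR; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < NP; i++)
        if (!finish[i]) return true;          /* some process can never finish */
    return false;
}
```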
  • Example

  • Detection-algorithm usage

    • when & how often should the algorithm run?
      • depending on how often a deadlock is likely to occur
      • and on how many processes will be affected when one happens
    • invoking detection on every request that cannot be granted immediately: finds the cycle right away, but is expensive
    • alternative: invoke at defined intervals, e.g. periodically or when CPU utilization drops
  • Recovery from deadlock: process termination

    • abort all deadlocked processes
    • abort one process at a time, until deadlock cycle is eliminated
    • in what priority?
  • Recovery from deadlock: resource preemption

    • successively preempting some resources from processes
      • and give the resource to other processes until deadlock cycle is broken
    • selecting a victim: which processes / resources to preempt, minimizing cost
    • rollback: return the preempted process to some safe state and restart it from there
    • starvation: the same process may always be picked as the victim
      • e.g. include no. of rollbacks in the cost factor
    • 👨‍🏫 some resources cannot be compromised: e.g. semaphore
      • others, like memory, can be saved into HDD and loaded later
  • Remarks

    • in PC: restart computer or terminate some process
      • thus doesn't run deadlock avoidance
    • yet, our focus is on mainframe computers
      • where computation is done as a service
Main Memory
  • Background

    • main memory: central to operation of computer systems
      • especially for Von Neumann!
    • memory: consists of a large array of bytes
      • each w/ its own address: byte-addressable
    • program: brought into memory
      • then placed within a process to run
      • i.e. PCB: points to address space of process
    • typical instruction-execution cycle: fetch instruction from memory, decode, possibly fetch operands from memory, execute, possibly store results back to memory
    • memory management unit / MMU: only sees stream of addresses
      • not knowing how it's generated
        • e.g. instruction counter, indexing, etc.
      • only interested in the sequence of memory addresses generated by a running program
    • main memory (+cache) and registers: only storage w/ CPU's direct access
    • MMU: sees stream of addresses, read requests, or write requests and data
    • memory protection: required to ensure correct operation
      • must ensure: OS memory space is protected from user processes
        • early DOS systems didn't, and thus crashed often
        • minimum protection
      • as well as user processes from one another
      • protection: shall be provided by hardware for performance / speed
  • Per-process memory space: separated from each other

    • separate per-process
    • legal range of address space: defined by base and limit registers
      • base: smallest legal physical memory address; limit: size of the range
    • most basic memory protection!
    • hardware address protection
      • access is legal iff: addr ≥ base && addr < base + limit
        • if false: traps, results in addressing error
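The base/limit check is a one-line predicate; a minimal C sketch (names are illustrative, in hardware this runs on every memory access):

```c
/* Hardware base/limit protection check: an access to addr is legal
 * iff base <= addr < base + limit; otherwise the CPU traps to the OS
 * with an addressing error. */
#include <stdbool.h>
#include <stdint.h>

bool access_ok(uint32_t addr, uint32_t base, uint32_t limit) {
    return addr >= base && addr < base + limit;
}
```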
  • Address binding

    • 👨‍🏫 multiple methods existed historically
    • usually: program resides on disk as a binary executable
    • most systems: allow a user process to reside in any part of physical memory
  • Multi-step processing of a user program

    • address binding: can execute in 3 different stages
      • compile time
        • 👨‍🏫 because MS-DOS only ran one process!
      • link / load time
      • execution time
        • started doing so 30 years ago
      • 02_running
  • Address translation and protection

    • in uni-programming era: no need for address translation
    • early stage of multi-programming: translation & protection via base & limit registers
  • Logical vs. physical address space

    • address space:
      • all addresses a process can "touch"
      • each process: w/ their own unique memory address space
        • kernel address space
        • address space: starts from 0 (logically)
    • thus: two views of memory exist
      • logical address: generated by the CPU / aka virtual address
      • physical address: address seen by the memory unit
    • translation: makes protection implementation much easier
      • ensures: process cannot access another process's address space
    • compile- & load-time binding: logical and physical addresses are identical
      • execution-time binding: logical ≠ physical, so translation is needed
  • Memory management unit

    • MMU: hardware mechanism: maps virtual address to physical, at runtime
    • many different mapping methods: to be covered
    • for simplicity: consider the relocation-register scheme
      • value in relocation register: added to every address generated by a user process, at the time it is sent to memory
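The relocation-register scheme can be sketched as follows. A minimal illustration with hypothetical names; the 14000/1000 values in the usage test are just sample register contents:

```c
/* Sketch of MMU translation with a single relocation register:
 * physical = logical + relocation, after a limit check. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t relocation;   /* added to every logical address */
    uint32_t limit;        /* size of the process's address space */
} mmu_t;

/* Returns true and sets *phys on a legal access; false means the
 * hardware would trap with an addressing error. */
bool translate(const mmu_t *mmu, uint32_t logical, uint32_t *phys) {
    if (logical >= mmu->limit) return false;   /* addressing error */
    *phys = logical + mmu->relocation;         /* dynamic relocation */
    return true;
}
```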
Contiguous Memory Allocation
  • Contiguous allocation

    • early memory allocation methods
    • 03_MMU
    • main memory: usually divided into two partitions
    • one for OS, another for user processes
    • many OS: place the OS in high memory
    • each process: contained in a single section of memory
      • that is contiguous to section containing the next process
    • multiple-partition allocations
      • degree of multiprogramming: bounded by no. of partitions
      • variable partition sizes (depending on a process' needs)
      • hole: block of available memory - scattered throughout memory
      • when a process arrives: allocated memory from a hole
        • s.t. it can reside contiguously
      • process exiting: frees its partition
        • adjacent free partitions: combined
      • OS: maintains information about:
        1. allocated partitions
        2. free partitions (hole)
      • problem: a large-enough hole might not exist, even though the sum of the free holes is big
        • i.e. external fragmentation (the free memory is external to every process's block)
      • 04_diagram
  • Dynamic storage allocation problem

    • how to satisfy a request of (variable) size n from the list of free holes?
    • first fit: allocate the first hole that is big enough
    • best fit: allocate the smallest hole that is big enough
      • must search entire list
    • worst fit: allocate the largest hole that is big enough
      • must search entire list
      • produces: the largest leftover hole
        • intended for: reusing remaining hole
    • best & worst: name doesn't reflect their value!
    • yet, experiments have shown: first & best fit are better than worst fit
      • in decreasing time & storage utilization
      • generally, first fit is better
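First fit and best fit can be contrasted in a few lines of C. A minimal sketch over a flat array of hole sizes (the list representation is illustrative, not the lecture's):

```c
/* First fit: return the index of the first hole large enough for the
 * request, or -1 if none exists. Stops at the first match. */
int first_fit(const int holes[], int nholes, int request) {
    for (int i = 0; i < nholes; i++)
        if (holes[i] >= request) return i;
    return -1;
}

/* Best fit: return the index of the smallest hole that is still large
 * enough. Must search the entire list. */
int best_fit(const int holes[], int nholes, int request) {
    int best = -1;
    for (int i = 0; i < nholes; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;
    return best;
}
```

For a hole list {100, 500, 200, 300, 600} and a 212-byte request, first fit takes the 500-byte hole while best fit keeps searching and picks the 300-byte one.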
  • Fragmentation

    • external fragmentation: enough total free memory exists, but it is not contiguous
    • internal fragmentation: allocated partition may be slightly larger than requested; the unused slack inside it is wasted
    • reducing external fragmentation: compaction
      • shuffle memory contents to place all free memory together in one large block
      • possible only if relocation is dynamic (done at execution time)
    • problem: address space being contiguous
Segmentation
  • Segmentation: user view

    • 👨‍🏫 simple extension of contiguous allocation
    • memory-management scheme: supports user view of memory
    • program: a collection of segments (logical unit)
    • e.g. main program, procedure, function, method, stack, etc.
    • 05_segmentation
    • 06_segmentation_logical_physical
    • problem (external fragmentation): still exists, yet less severe
  • Segmentation architecture

    • logical address: now consists of a two-tuple ⟨segment-number, offset⟩
    • segment table: maps two-dimensional logical addresses to physical addresses
      • each entry w/ a segment base and a segment limit
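Segment-table translation can be sketched as below. A minimal illustration with hypothetical names; each access checks the offset against the segment's limit before adding the base:

```c
/* Sketch of segmentation: a logical address is a (segment number,
 * offset) pair; each segment-table entry holds a base and a limit. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment */
} seg_entry;

/* Returns true and sets *phys on a legal access; false means trap
 * (bad segment number or offset beyond the segment limit). */
bool seg_translate(const seg_entry table[], int nsegs,
                   int s, uint32_t offset, uint32_t *phys) {
    if (s < 0 || s >= nsegs) return false;       /* bad segment number */
    if (offset >= table[s].limit) return false;  /* outside the segment */
    *phys = table[s].base + offset;
    return true;
}
```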