Android/GogoTraining/Fundamentals of Operating Systems
Description
As an overview, this course touches on all of the functions within an operating system and the interrelationships between those functions. This course can be used stand-alone to gain a basic understanding of all of the concepts presented. In addition, it has been designed to follow a companion textbook, Modern Operating Systems by Andrew S. Tanenbaum, for those interested in deeper exploration of the concepts. The companion textbook may be used along with viewing this course or as a resource for later reference when more detailed information is needed.
Objectives
As a result of taking this course, you will be able to:
- Explain the overall objectives and structure of any modern operating system.
- Identify the differences and similarities between operating systems.
- Describe the functions within an operating system and how they work together.
- List what causes many operating system errors and crash conditions.
- Explain how to effectively use the more advanced operating system features to improve productivity.
- Choose which operating system approach best suits individual situations.
Outline
Module 00: Course Introduction - Fundamentals of Operating Systems
Module 01: Introduction to Operating Systems
- What is an operating system?
- History of Operating Systems
- Types of Operating Systems
Module 02: Operating Systems and Hardware
- Operating Systems Concepts
- Processor and Processor Features
- Processor Pipeline and Execution
- Operating Systems and Processors
- Memory Overview
- I/O Devices Overview
- Operating Systems and Hardware Quiz
Module 03: Operating System Concepts
- Types of Operating Systems
- Services Provided by Operating Systems
- Invoking Operating System Services
- Operating System Concepts Quiz
Module 04: Operating System Structure
- Monolithic Systems
- Layered Systems
- Virtual Machines
- Other Structures
- Operating System Structure Quiz
Module 05: Processes and Threads
- The Process Model
- The Thread Model
- Implementation Techniques
- Trade-offs
- Processes and Threads Quiz
Module 06: Interprocess Communication
- Types of Interprocess Communication
- Operating System Implementations
- Add-on Implementations
- Interprocess Communication Quiz
Module 07: Scheduling
- Introduction to Operating System Scheduling
- Types of Schedulers
- Policy Versus Mechanism
- Thread Scheduling
- Scheduling Quiz
Module 08: Deadlocks
- What are deadlocks?
- Detection and Recovery
- Avoidance
- Prevention
- Other Issues
- Deadlocks Quiz
Module 09: Memory Management
- Basics of Memory Management
- Swapping
- Paging for Memory Management
- Paging for Virtual Memory
- Implementation Issues
- Memory Management Quiz
Module 10: Input/Output
- Principles of I/O Hardware
- Principles of I/O Software
- I/O Software Layers
- Types of I/O Devices
- Power Management
- Input/Output Quiz
Module 11: File Systems
- File Systems Basics
- Files
- Directories
- Implementation
- Examples
- File Systems Quiz
Module 12: Multimedia Operating Systems
- Introduction to Multimedia Operating Systems
- Multimedia Files
- Multimedia Process Scheduling
- Multimedia File Systems
- Caching
- Disk Scheduling
- Multimedia Operating Systems Quiz
Module 13: Multiple Processor Systems
- Types of Multiple Processors
- Multiple Processor Hardware
- Multiple Processor Operating Systems
- Multiple Processor Scheduling
- Multiple Computers
- Multiple Processor Systems Quiz
Module 14: Operating System Security
- The Security Environment
- Basics of Cryptography
- User Authentication
- Security Attacks
- Protection Mechanisms
- Security Quiz
Module 15: Examples of Operating System Architectures
- Unix and Linux
- Windows
- Others
- Operating System Architectures Quiz
Module 16: Course Summary
00: Course Introduction - Fundamentals of Operating Systems
Designed to follow a companion textbook: Modern Operating Systems by Andrew S. Tanenbaum
Course Objectives:
- Explain the overall objectives and structure of any modern operating system
- Identify the differences and similarities between operating systems
- Describe the functions within an operating system and how they work together
- List what causes many operating system errors and crash conditions
- Use the more advanced operating system features to improve productivity
- Choose which operating system approach best suits individual situations
01: Introduction to Operating Systems
Operating System is an interface to hardware and resource manager.
The OS provides:
- Interface to the system's resources - provides a global view of shared devices
- Protects applications from device errors - Common standard approach to recovery
- Shared code - so each program doesn't have to carry its own copy
- Abstract diverse hardware into common interface
- Applications are written to an O/S, not to a specific computer
- Automatic enhancements with O/S upgrades, which reduces application development costs
- Sharing resources improves productivity - many applications run concurrently
- Manages even non-shareable resources, such as printers
- Makes writing programs easier
Layers of a system
Hardware:
- Physical Devices
- Micro-architecture - instruction set
- Machine Language (ISA - Instruction Set Architecture)
System Programs:
- Operating system
- Utilities - compilers, editors, command interpreters
Application Programs
- Web browser, photo editor, mp3 player
Operating System has many names:
- Operating System
- O/S
- Kernel (popular in Linux)
- Control Program (from mainframe environment)
- Master Scheduler
- RTOS (Real Time OS) - embedded systems
- DOS (Disk Operating System, Distributed Operating System)
- Executive (from mainframe environment)
O/S as Protection Environment
- Limits access to assigned resources
- Cannot change another program's memory
- Cannot read another program's memory
- Cannot use more resources than approved
- Starts and ends applications
- Implements security: file access control, encryption
O/S as Computer Program (different from regular applications)
- Always running
- Access to special instructions (privileged or kernel instructions)
- Direct access to hardware
02: Operating Systems and Hardware
Processor - the "brain" of a computer
- CPU (Central Processing Unit)
- Execution Unit
- Micro-processor
Processor Features
- Registers - data for quick access (memory locations built into processor)
- Program Counter - next instruction to execute
- Status - state information and controls behavior
- Privilege levels - usually kernel and user modes
Instruction Set Types:
- Reduced Instruction Set Computer (RISC)
- simple and fast instructions
- examples: SPARC, MIPS, PowerPC
- Complex Instruction Set Computer (CISC)
- instructions do a lot but are slower
- examples: Intel x86, Motorola 68000
Pipeline stages: fetch, decode, execute and commit
Memory -
- registers - in the processor
- cache - in the processor and/or the motherboard
- main memory - on the motherboard (usually)
- magnetic disk - internal and/or external (persistent storage)
- magnetic tape - long-term and backups
Memory speeds and sizes
- registers - 1 nsec, < 1 KB
- cache - 2 nsec, 1 MB
- main memory - 10 nsec, 64-512 MB
- magnetic disk - 10 msec, 5-500 GB
- magnetic tape - 100 sec, 20-1000 GB
Memory Protection
- Partitions
- Segmentation
- Virtual Memory (modern)
- Virtual to physical address translation
I/O Devices:
- I/O ports (e.g. IN and OUT x86 instructions)
- Memory mapped - OS maps a device into a memory location
- Special instructions
- Interrupts
Device Driver
- usually developed by device manufacturers
- considered part of the OS
- using privileged instructions
- direct memory access
DMA - Direct Memory Access
- allows i/o device to access program memory directly (bypassing CPU)
- greatly speeds up data flow
03: Operating System Concepts
Server Operating Systems
Real-Time Operating Systems
- control physical environments
- hard real-time (e.g. manufacturing)
- soft real-time (e.g. multimedia systems)
Distributed Operating Systems
System Calls and System Libraries
- System Calls are O/S dependent
API (Application Program Interface)
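The system-call/library split above can be sketched in Python, whose os module exposes thin wrappers over the POSIX open, write, and close system calls, while the built-in open() is a higher-level, buffered API layered on the same calls (a minimal illustration; the file path is made up):

```python
import os
import tempfile

# os.open / os.write / os.close are thin wrappers over the POSIX
# open(2), write(2), and close(2) system calls.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)  # system call: open(2)
written = os.write(fd, b"hello")                     # system call: write(2)
os.close(fd)                                         # system call: close(2)

# The buffered library API (built on the same system calls) reads it back.
with open(path, "rb") as f:
    data = f.read()
```

The point: applications usually call the portable library API, and the library invokes the O/S-dependent system calls underneath.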
04: Operating System Structure
Monolithic Systems
- OS as a single, large program
- Services are called internally as subprograms
- Early operating systems
- Faster switching between O/S functions
- No context switch overhead
- Major changes require recompiling entire O/S
- Service locations are hidden
- Most changes between versions done with patches
- Mainframe systems
Layered Systems
- O/S as many smaller programs
- Services can be called directly
- Core of the O/S is called the "Kernel"
- Mach, Unix and Windows (Linux somewhat)
- Context switch overhead for every call
- Slower switching between O/S functions
- Changes made by recompiling just one function
- Only minor changes are done with patches
- Service locations are exposed with API
Virtual Machines
- An O/S above an O/S
- Virtual Machine Monitor (VMM) - aka Hypervisor
Microkernel - very small O/S - most services as applications
- embedded systems
Exokernels - virtualization features in an O/S
- 8086 mode on Intel Pentium processors
Client-Server Model
- O/S services as server processes
RTOS - Real-Time Operating Systems
- Special features for controlling physical devices
- Sensors, valves, robots, etc.
DO/S - Distributed Operating Systems
- Parts of a single O/S on many separate computers (even in different countries)
- Example: Plan9
05: Processes and Threads
Process and Thread Models
OS must manage and control resources
Process - application loaded for execution
Process Control Block (PCB)
- Ownership information
- Construct within the OS to track process information: names, process table, internal data structures, unique id called Process Identifier (PID)
- Connected files, running state, next instruction, registers and flags, processor affinity
Thread - "lightweight process" within a process
Task (or Thread) Control Block (TCB)
- Execution information
- Current state of thread
- One for each Thread
Process - The O/S unit of ownership
Thread - The O/S unit of execution
- always inside a Process - can be a one-to-one relationship, or a many-to-one relationship
Communication between processes is external:
- pipes, files, sockets, networks, etc.
Child processes can be spawned (forked). They will start off basically as copies and may have links to shared resources.
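A minimal Unix-only sketch of spawning a child with fork: the call returns 0 in the child (which starts as a near-copy of the parent) and the child's PID in the parent (the exit code 42 is arbitrary):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: begins life as a copy of the parent, then exits immediately.
    os._exit(42)
else:
    # Parent: waits for the child and inspects its exit status.
    _, status = os.waitpid(pid, 0)
    child_exit = os.WEXITSTATUS(status)
```

In a real application the child would typically exec a new program or do work on the copied state before exiting.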
Processes Only vs Threads:
- Process has more overhead (context switching)
- Complicated application design
- No application multitasking
06: Interprocess Communication
Interprocess Communication (IPC)
Communication between Processes or Threads
Critical sections - areas of code that could conflict - need to be serialized
Concepts:
- Semaphores
- Mutexes
- Monitors
- Message Passing
- Barriers
Semaphores - mechanism for task synchronization
- it is an integer and a queue
- Two operations: P (to pass) and V (to release)
- The P and V come from Dutch, the native language of the semaphore's inventor (Dijkstra)
- Two types: Cooperation and Competition
- Serializes access to a control block or a buffer
- Serializes critical sections of the process
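Both semaphore roles can be sketched with Python's threading module: two counting semaphores cooperate to bound a buffer, while a lock competes to serialize the critical section (buffer size and item counts are arbitrary):

```python
import threading
from collections import deque

buffer, results = deque(), []
empty = threading.Semaphore(2)   # cooperation: free slots (buffer size 2)
full = threading.Semaphore(0)    # cooperation: items available
lock = threading.Lock()          # competition: guards the shared buffer

def producer():
    for item in range(5):
        empty.acquire()          # P: wait for a free slot
        with lock:
            buffer.append(item)
        full.release()           # V: signal an item is ready

def consumer():
    for _ in range(5):
        full.acquire()           # P: wait for an item
        with lock:
            results.append(buffer.popleft())
        empty.release()          # V: free the slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The producer can never get more than two items ahead, and the consumer can never read an empty buffer.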
Mutexes - simplified version of Semaphores
- Only used for mutual exclusion (not buffers)
- Two states: locked and unlocked
- Serializes access to a control block
- Serializes critical sections of the process
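A mutex sketch in Python: "counter += 1" is a read-modify-write that interleaving threads could corrupt, and the lock's two states (locked/unlocked) serialize that critical section (thread and iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10000):
        with lock:          # acquire (locked) ... release (unlocked)
            counter += 1    # critical section: read, modify, write

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```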
Monitors - higher-level synchronization technique
- package with a collection of: procedures, variables and data structures
- all packaged within an object (internals hidden from application)
- requestors can call monitor procedures
- implemented internally with semaphores or mutexes
Message Passing - Higher-level communication technique
- For passing information between processes
- For requesting a service
- Messages are queued - communication doesn't have to happen in real time, unlike semaphores/monitors
- System local or across network
- Different approach to software design
- Synchronous or asynchronous
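A local message-passing sketch using a Python queue between two threads: sends are asynchronous (they return immediately), receives are synchronous (they block until a message arrives), and a sentinel marks the end of the conversation (message contents are made up):

```python
import threading
import queue

mailbox = queue.Queue()

def sender():
    for i in range(3):
        mailbox.put(("request", i))   # asynchronous send: returns at once
    mailbox.put(None)                 # sentinel: no more messages

received = []
t = threading.Thread(target=sender)
t.start()
while True:
    msg = mailbox.get()               # synchronous receive: blocks if empty
    if msg is None:
        break
    received.append(msg)
t.join()
```

The same pattern scales up to cross-network messaging systems, where the queue lives in a broker instead of shared memory.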
Barriers - synchronize groups of processes
- controls progress to phases or stages
- Parallel processes working on the same problem
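A barrier sketch: each worker finishes phase 1, waits at the barrier, and only then may any worker begin phase 2, so no one races ahead of the group (the worker count and phase labels are illustrative):

```python
import threading

N = 3
barrier = threading.Barrier(N)
log = []
log_lock = threading.Lock()

def worker(i):
    with log_lock:
        log.append(("phase1", i))   # work belonging to the first stage
    barrier.wait()                  # block until all N workers arrive
    with log_lock:
        log.append(("phase2", i))   # no worker starts this early

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

phases = [p for p, _ in log]        # all phase1 entries precede all phase2
```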
Atomic Operations
- Uninterruptible machine instructions
- Read-Modify-Write a memory location as a single uninterruptible operation
- Required in multi-processor operating systems - must have hardware support
- Provided as O/S services because compilers make no atomicity guarantees
- Linux examples: atomic_read, atomic_set, atomic_add, etc.
Spin Locks
- Mutex functions to control access to
- Critical sections of code
- Control blocks
- Program loops (spins) while section locked
- Linux examples: spin_lock, spin_unlock, spin_trylock, etc.
O/S concurrency
- Allow multiple processes to run at the same time, but still have control of what is being accessed
- Read/Write Spin Locks
- Increases O/S concurrency
- Allow many readers but only one writer
Linux has two types (mutex based)
- kernel semaphores for O/S use only
- System V IPC semaphores for user mode
Pipes
- One-way flow of data between processes
- input of one directly from output of another
- Originated with Unix (all versions)
- Also in Linux, Windows and many other O/Ss
- Most use the vertical bar ("|") as the pipe connector
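A pipe sketch using the POSIX-style os.pipe wrapper: one-way flow, where whatever the write end produces, the read end consumes, much like "producer | consumer" on a shell command line (the payload is made up):

```python
import os

read_fd, write_fd = os.pipe()        # kernel-managed one-way channel
os.write(write_fd, b"upstream output")
os.close(write_fd)                   # closing signals end-of-data to reader
data = os.read(read_fd, 1024)
os.close(read_fd)
```

In practice the two ends usually live in different processes, with the pipe created before a fork so both inherit the descriptors.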
FIFO (First In, First Out)
- Similar to a pipe
- But between more than two processes
- POSIX defined functionality
System V IPC
- Originally in System V Unix
- now in most versions including Linux
- In Windows with POSIX and INTERIX
- Includes:
- IPC Semaphores
- IPC Messages
- IPC Shared Memory
Add-on Implementations
- Available as Libraries and Extensions
- POSIX (Portable Operating System Interface for Unix)
- IBM WebSphere MQ (MQSeries)
- Advanced Message Queuing Protocol (AMQP)
- Amazon Simple Queue Service (Amazon SQS)
- Java Message Service (JMS)
- Microsoft Message Queuing (MSMQ)
- Distributed Computing Environment (DCE)
- Message Passing Interface (MPI)
- Message Bus (MBUS)
- Common Object Request Broker Architecture (CORBA)
07: Scheduling
Scheduling - selecting execution thread to run on a processor
- could be a process or a thread
- could be a single or a multi-processor system
- objectives: throughput, response time, fairness, priority, etc.
Scheduling Algorithm
- How the O/S selects a Process to run
- Usually with some notion of priority
- Unix focused on "Fairness" - every Process gets some chance to run
- Many allocate execution time slices
How the O/S selects the Process
- Must be in memory
- Must be in the "Ready" state
- Next based on O/S Scheduling Algorithm policy - throughput, response time, fairness, priority, etc.
When to Schedule
- A running Process completes
- A running Process does an I/O
- The time-slice expires
- A higher priority Process becomes Ready (preemption)
- An interrupt happens (hardware or software)
Scheduling versus Dispatching
- Scheduling is selecting the Process to run
- Dispatching is activating it on a processor
Types of Schedulers:
- Batch - throughput, turnaround time and CPU utilization
- Interactive - response time and user expectations
- Real-time - meeting deadlines and predictability
Batch Scheduling:
- First-Come First-Served
- Shortest Job First
- Shortest Remaining Time
- Three-Level Scheduling (separate algorithms for different parts of system)
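The advantage of Shortest Job First can be shown with a small average-turnaround calculation; the burst times (8, 4, 4, 4) are a hypothetical batch where all jobs arrive at time 0:

```python
def avg_turnaround(order):
    """Average turnaround time when jobs run back-to-back in this order."""
    clock, total = 0, 0
    for burst in order:
        clock += burst        # this job finishes at time `clock`
        total += clock        # turnaround = finish time - arrival (0)
    return total / len(order)

jobs = [8, 4, 4, 4]
fcfs = avg_turnaround(jobs)          # First-Come First-Served order
sjf = avg_turnaround(sorted(jobs))   # Shortest Job First order
```

Running the short jobs first lowers the average turnaround (11.0 vs 14.0 here), which is why SJF is provably optimal for this metric when all jobs are available up front.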
Interactive Scheduling Algorithms:
- Round-Robin
- Priority (often with multiple queues)
- Shortest Process First
- Guaranteed
- Lottery
- Fair-Share (user based fairness)
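Round-Robin, the most common interactive algorithm above, can be sketched as a simulation: each ready process runs for at most one quantum, then rejoins the back of the ready queue (process names, burst times, and quantum are made up):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the sequence of (pid, time_ran) slices under Round-Robin."""
    ready = deque(bursts.items())       # (pid, remaining time) in FIFO order
    timeline = []
    while ready:
        pid, remaining = ready.popleft()
        ran = min(quantum, remaining)   # run one quantum, or less if done
        timeline.append((pid, ran))
        if remaining > ran:
            ready.append((pid, remaining - ran))  # not finished: requeue
    return timeline

timeline = round_robin({"A": 3, "B": 5, "C": 2}, quantum=2)
```

Every process gets a slice in turn, so response time stays bounded even for long jobs, which matches the "fairness" objective.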
Real-time Scheduling Algorithms
- Hard - absolute deadlines
- Soft - occasional missed deadline is tolerable
- Rate Monotonic
- Earliest Deadline First
Policy versus Mechanism
- Policy sets overall objectives
- Mechanism (implementation) allows changes to objectives
- example: process can lower child process priority
- user can adjust priority from default
- operator changes throughput goal to response time
08: Deadlocks
What are deadlocks?
- "A standstill resulting from the opposition of two unrelenting forces or factions." -- The American Heritage Dictionary, 3rd Edition
Standstill - some processes cannot execute
Opposition - using the same resources
Unrelenting - won't exit until resource used
Forces or factions - processes, threads or kernel
Cause of Deadlocks?
- Multiple processes need a resource - neither can do anything about it
Formal Requirements for Deadlock:
- Mutual exclusion condition - a resource is either in use or available, but not both (not share-able)
- Hold and wait condition - process can hold one resource and request use of another resource
- No preemption condition - once a process has a resource, it has to finish on its own; the resource can't be forcibly taken from the process
- Circular wait condition - circular chain between two or more processes
Strategies for dealing with Deadlocks:
- Ignore the problem - most applications don't need to worry
- Detection and recovery - kill process holding critical resource
- Dynamic avoidance - don't grant requests if they will cause deadlocks
- Prevention - strict allocation control policies
Banker's Algorithm
- Popular theory in O/S literature
- Resource matrices: assigned and needed
- Fails because of requirement for advance knowledge
- Fails when number of processes varies dynamically
Banker's algorithm - Wikipedia http://en.wikipedia.org/wiki/Banker%27s_algorithm
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.
For the Banker's algorithm to work, it needs to know three things:
- How much of each resource each process could possibly request[MAX]
- How much of each resource each process is currently holding[ALLOCATED]
- How much of each resource the system currently has available[AVAILABLE]
Resources may be allocated to a process only if it satisfies the following conditions:
- request ≤ max, else set error condition as process has crossed maximum claim made by it.
- request ≤ available, else process waits until resources are available.
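The safety check at the heart of the Banker's algorithm can be sketched from the three inputs above (MAX, ALLOCATED, AVAILABLE): a state is safe if some completion order lets every process finish. The resource counts below are hypothetical:

```python
def is_safe(available, max_claims, allocated):
    """Banker's safety check: can every process run to completion?"""
    work = list(available)
    need = [[m - a for m, a in zip(mrow, arow)]          # need = max - allocated
            for mrow, arow in zip(max_claims, allocated)]
    finished = [False] * len(max_claims)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish, then releases everything it holds.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# One resource type, three processes, illustrative numbers.
safe = is_safe([3], [[9], [4], [7]], [[3], [2], [2]])
unsafe = is_safe([1], [[9], [4], [7]], [[4], [2], [3]])
```

A request is granted only if the state after the grant would still pass this check, which is exactly where the need for advance knowledge of maximum claims comes in.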
Prevention - not possible in the real world, so attack one of the 4 conditions
Attacking Mutual Exclusion:
- Only allow a single process exclusive access to a resource
- For example, printer spooler
- Can't be applied to all resources
Attacking Hold and Wait:
- Process must request all resources before execution
- may not be known when process starts
- some resources often idle
- possible solution: Release everything and re-request for each new resource needed - very disruptive to an application
Attacking No Preemption
- most difficult to attack
- Preemption often creates bigger problems
Attacking Circular Wait
- Common approach - assign each resource an identification number
- process must allocate in numerical order
- cannot request lower numbered resource
- problems arise when different orderings needed
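The numerical-ordering rule above can be sketched with two numbered locks: every caller sorts its requested lock numbers before acquiring, so no cycle of waiters can form regardless of the order callers ask in (lock numbers and the helper names are illustrative):

```python
import threading

locks = {1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(*ids):
    """Acquire the requested locks in ascending numerical order."""
    for lock_id in sorted(ids):      # never request a lower-numbered lock
        locks[lock_id].acquire()     # while holding a higher-numbered one
    return sorted(ids)

def release(*ids):
    for lock_id in sorted(ids, reverse=True):
        locks[lock_id].release()

# Both callers want {1, 2}; each ends up locking 1 before 2,
# so the circular-wait condition can never hold.
order_a = acquire_in_order(2, 1)
release(1, 2)
order_b = acquire_in_order(1, 2)
release(1, 2)
```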
Other issues:
- two-phase locking - common in databases but often too costly
- non-resource deadlocks - each waiting for message from the other
- starvation - no deadlock, but process has nothing to do
- policy may cause busy system to never give resource to a process
09: Memory Management
Basics:
- keep track of memory in use and available
- allocate/deallocate memory to/from processes
- Manage program placement
- relocation, paging, swapping, etc.
Can be very simple:
- load programs into all available memory
To very Complex:
- Virtual memory paging and swapping
Types:
- Monoprogramming - one program runs
- Multiprogramming - multiple programs run
- Swapping - moving program out of memory to some other storage
- Paging - same idea as swapping but with smaller parts of a process
- Virtual memory - how to make system look like it has more memory than it actually has
Thrashing - big issue when memory is over committed
Virtual Memory Management
- small, equal sized allocation units
- non-contiguous physical memory allocation
- contiguous program address space
- physical memory relocation transparent to programs
Physical Memory Allocation
- avoids under-utilization and fragmentation
- frame - block of physical memory (usually 4 KB, 8 KB or larger)
- page - block of virtual memory (same size as a physical frame)
Physical to Virtual Addresses
- requires hardware support
- managed by the operating system (symbiotic relationship between hardware and operating system)
- DAT - Dynamic Address Translation
- every virtual address maps to a physical address
- memory appears contiguous to program, when in reality it is not
- done with page tables
Page Tables:
- mechanism to do address translation
- cooperatively between OS and hardware
- reason why OS are designed for specific types of hardware
- maps page addresses to frame addresses
- offset into the page matches the frame offset
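A toy single-level translation, as a sketch of what the DAT hardware does: split the virtual address into (page number, offset), look the page up in the page table, and carry the offset over unchanged (the page-to-frame mapping is hypothetical):

```python
PAGE_SIZE = 4096  # 4 KB pages

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical address via a page table."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE     # offset into page == frame offset
    frame = page_table.get(page)
    if frame is None:
        raise LookupError("page fault")   # no frame behind this page
    return frame * PAGE_SIZE + offset

# Hypothetical mapping: page 0 -> frame 5, page 1 -> frame 2.
page_table = {0: 5, 1: 2}
phys = translate(4096 + 100, page_table)  # page 1, offset 100 -> frame 2
```

Real hardware does this per memory access, which is why the TLB caches recent translations.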
Multi-Level Page Tables
- allows more operating system control
- allows for larger amount of memory
- improves security and performance
- additional value when paging implemented
TLB - Translation Lookaside Buffer
- buffer where hardware saves translations that it has done (caching)
- hardware support for address translation
- keeps recent translations in special registers
- avoids page table accesses
Paging for Virtual Memory
- same as virtual memory but some pages written to disk
- virtual address translation to missing frame
- allows programs to use more than the physical memory
Page Fault - when we access memory that doesn't have a physical frame behind it
- frame must be read into memory before translation completes
Page In - when the O/S copies page from disk to memory because of a Page Fault
Page Out - when the OS copies page from memory to disk to free a Frame for another program
- can happen because of a page fault
- OS usually keeps a pool of free Frames
Page Replacement - selecting which page to Page Out
- objective is to pick a page not needed again
- picking the wrong page causes it to be paged in again soon
- requires additional information for each frame
- referenced recently and modified since paged-in
Page Replacement Algorithms
- not recently used
- first-in, first-out
- second chance (intermediary pool before being paged out to disk)
- clock (implementation variation on second chance)
- Least Recently Used (LRU)
- efficient but requires some special hardware - resets reference counter on access
- used in large mainframes (MVS & z/OS)
- WSClock
- no special hardware needed
- simple to implement and performs well
- uses a circular list of page frames with pointers
- used in Unix and Linux systems
Shared Pages
- allows processes to use the same memory - much like threads within a process
- implemented with page table entries
Issues:
- page faults - time to find available frame and to read in page
- instruction backup - instruction causing the fault must be restarted
- backing store - media behind virtual memory storage
- separation of policy and mechanism - performance versus OS maintenance
10: Input/Output
Types of I/O devices
- all controlled by a device controller
- memory-mapped I/O - there's a memory location specific to that device maintained by the OS
- Direct Memory Access (DMA)
Types:
- Block Devices - read and/or write independent blocks of data (usually hard drives)
- Character devices - data is read and/or written as a stream
- Other specialty devices - clocks, memory-mapped screens, etc
Device Controllers
- hardware that interfaces to a device
- adapter or Host Bus Adapter (HBA)
- interface between the device and system may be vendor-specific or generalized
Memory-mapped I/O
- I/O devices that appear as memory locations
- a memory location is the interface to a physical device
- simplifies application usage
- complicates operating system - disable cache, virtual vs physical memory, buses
Direct Memory Access (DMA)
- controller accesses memory directly
- uses interrupts to signal completion
- reduces work done by processor
- increases device usage and performance
DMA Procedure:
- CPU programs the DMA controller
- DMA controller requests transfer to memory
- Data is transferred between the disk and memory
- Disk controller acknowledges when the transfer is done
- DMA controller sends interrupt to CPU
I/O Software Goals:
- Device independence
- Uniform Naming
- Error Handling
- Buffering and Sharing
Programmed I/O:
- Processor does all the work
- Send commands to device
- Sends and receives data to/from the device
- Usually byte or word at a time through ports
Interrupt Driven I/O:
- Offloads work from the processor
- Processor does other work instead of polling device
- Used with DMA to transfer large blocks of data
I/O Software Layers
- Interrupt Handlers
- Device Drivers
I/O Stack:
- User-level I/O software
- device-independent OS software
- device drivers
- interrupt handlers
- hardware
Interrupt Handlers
- System software to handle asynchronous events
- Hardware or software generated
- OS sets up interrupt handler priority
- can have a significant performance impact
Device Drivers
- system software that understands a device
- usually supplied by the hardware vendor
- many os vendors supply generic drivers
- split designs allow more device independence
- device facing and OS facing parts
Clocks
- Physical devices to measure the passage of time
- Require system software and application interface
- OS uses clocks and provides time services
User Interfaces
- Character oriented terminals
- Graphical user interfaces
- Network terminals
- Pointing devices
Power Management
- OS has to work hand in hand with the hardware
- hardware issues
- operating system issues
- degraded operation
- Advanced Configuration and Power Interface (ACPI) specification
- Reduce performance to save energy
11: File Systems
File - collection of data
Filesystem - collection of files managed by OS laid out in folders/directories
Filesystem types: EXT2, EXT3, NTFS, FAT, FAT32, CDFS, HPFS, etc
Role of OS:
- define the API for file access
- define the structure (directories, names, etc)
- provides drivers for physical access
- may provide "mountable filesystem"
Files - abstraction to name groups of data
- attributes - owner, permissions, dates/times, etc.
- structure - implicit or explicit
- implicit - extension can tell you what type of file it is
- explicit - attribute can tell you if it is executable
- type - implicit such as windows extensions
- access - sequential, direct, etc. (usually implicit)
- operations - create, delete, open, etc
- CRUD - create, read, update, delete
- performance - caching, organization, memory-mapped, replication, etc
Directories:
- how files are organized
- hierarchical directories common
- defines reference or "path" to files
- allows arbitrary number of levels
- can span physical disk drives on some operating systems (unix/linux)
- root directory - top of tree (unix/linux)
- drive letters per device (windows)
- MVS
- older operating systems (often mainframe)
- uses dotted names - user27.project.abc.src
- every disk volume has a table of contents - VTOC
- OS provides a catalog service
Reference Path
- how users and programs get to a file
- fully qualified - all directory level names included
- relative - assumes name in current directory
MS-DOS Directory Entry: (bytes)
- 8 - file name
- 3 - extension
- 1 - attributes
- 10 - reserved
- 2 - time
- 2 - date
- 2 - 1st block of FAT (File Allocation Table)
- 4 - size of file
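The 32-byte MS-DOS layout above can be packed and unpacked with Python's struct module; the file name, attribute byte, starting block, and size below are made-up sample values:

```python
import struct

# Little-endian, no padding: 8+3 name/ext, 1 attributes, 10 reserved,
# 2 time, 2 date, 2 first FAT block, 4 file size = 32 bytes.
LAYOUT = "<8s3sB10sHHHI"

entry = struct.pack(LAYOUT, b"README  ", b"TXT", 0x20,
                    b"\x00" * 10, 0, 0, 17, 1024)
name, ext, attrs, _, time_, date, first_block, size = struct.unpack(LAYOUT, entry)
```

Names shorter than 8 characters are space-padded, which is why they must be stripped when displayed.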
Windows 98 Directory Entry:
- 8 - file name
- 3 - extension
- 1 - attributes
- 1 - NT
- 1 - Sec
- 4 - creation date time
- 2 - last accessed
- 2 - starting block (bits 0-15)
- 4 - last write date/time
- 2 - starting block (bits 16-31)
- 4 - file size
Windows 98 Long File Name Directory Entry: (FAT-32)
- 1 - sequence
- 10 - 5 characters
- 1 - attributes
- 1 - "0"
- 1 - checksum
- 12 - 6 characters
- 2 - "0"
- 4 - 2 characters
Unix V7:
- directory entry
- points to an I-Node
- every file is associated with a single I-Node
- all file information is in the I-Node
Unix V7 Directory Entry:
- 2 - I-node number - points to data on disk
- 14 - file name
12: Multimedia Operating Systems
Multimedia
- usually large video and audio files or streams
- uses extremely high data rates
- requires real-time (or nearly) playback
File types: JPEG, MPEG, MP3, etc
Performance:
- File placement to avoid head movement
- Striping across multiple disks for performance
- enhanced caching and buffering
Some organized with multiple tracks
- video, audio, subtitles and playback controls
Encoding technique trade-offs - file size vs quality
Compression increases load on processor
Multimedia process scheduling
- consistency is more important than speed (smooth playback)
- deadlines must be met to ensure quality playback
- Two common methods:
- RMS simpler to implement but EDF works at higher processor utilizations
Rate Monotonic Scheduling (RMS)
- simpler
- fixed priority based on frequency of triggered events
- assumes each process needs the same processor time for each event
- guaranteed to work only if utilization is below n(2^(1/n) - 1), which approaches ln 2 ≈ 0.6931 (about 69%)
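The RMS utilization bound (due to Liu and Layland) is easy to compute: n periodic tasks are guaranteed schedulable if total processor utilization stays below n(2^(1/n) - 1), which falls toward ln 2 as n grows:

```python
import math

def rms_bound(n):
    """Worst-case schedulable utilization for n tasks under RMS."""
    return n * (2 ** (1 / n) - 1)

bounds = [rms_bound(n) for n in (1, 2, 10)]  # shrinks as n grows
limit = math.log(2)                          # ln 2: the large-n limit
```

So a single task may use the whole processor, but a large task set is only guaranteed below roughly 69% utilization; EDF, by contrast, can schedule up to 100%.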
Earliest Deadline First Scheduling (EDF)
- suited for more dynamic content
- dynamic priority based on deadline of next triggered event
- assumes each process announces processor time needed for each event
- can achieve higher processor utilization
Caching - pre-read data from disk into memory
- eliminate disk access time as much as possible
- Usual LRU (Least Recently Used) technique doesn't work well with multimedia, because streams are not re-read (unless the user rewinds)
- access patterns linear (streaming) instead of random
13: Multiple Processor Systems
Types of Multiple Processors:
- Symmetric Multi-Processors (SMP)
- Asymmetric Multi-Processors (AMP)
- Shared Memory Multi-Processors
- Shared Disk Multi-Processors
- Shared Nothing Multi-Processors
Symmetric Multi-Processors (SMP):
- common design with modern systems
- all processors are identical
- usually connected by the system bus
- called "tightly-coupled" system (processors to system bus to memory)
- specific processor transparent to programs
Asymmetric Multi-Processors (AMP):
- some processors have different ISA (Instruction Set Architecture)
- specialized processor (not generalized)
- examples: math co-processors (80x87), Graphics Processing Unit (GPU), encryption processors
Shared Memory Multi-Processors:
- each processor has access to all system memory
- OS can dispatch any process on any processor
- some performance issues
- each OS version designed to support some maximum number of processors
Shared Disk Multi-Processors:
- separate computers connected to common disk drives
- OS must manage file locking and access permissions
- Storage Cluster, NAS and SAN
Shared Nothing Multi-Processors:
- Separate computers connected only by communications
- OS provides communication services
- application responsible for locking and coordination
- Computer Clustering
Multi-core - on same chip
- Uniform Memory Access (UMA) - all memory access is equal on all processors
- all memory accesses take the same amount of time
- usual when all processors are on one system motherboard
- Non-Uniform Memory Access (NUMA) - memory access times differ across processors
- some memory takes longer to reach
- varies from one processor to another
- OS needs to schedule process on processor where memory is located
- cache used to mitigate delays
- System Bus (Interprocessor Communication and Cache Coherency)
- OS sees symmetrical processors
Scheduling complexity issues:
- which processor to run it on
- CPU affinity
- implicit processor affinity to reduce cache reloads
- explicit processor affinity at program request
- multiple threads running on different processors
- OS cannot swap out only part of a process (paging, in contrast, moves individual pages)
- threads may access different areas of the process address space
- causes cache "pollution"
Multiple computer clustering interconnection technologies:
- switch
- ring
- grid or mesh
- double torus
- cube
- hypercube
Distributed Shared Memory (DSM)
- give application illusion of shared memory using separate computers
- simplifies application design but complicates OS
Module 14: Operating System Security
Security Environment:
- Threats - what can go wrong
- Intruders - who is behind the threat
- Accidental Loss - same result but different reason
Threats:
- data integrity - tampering with data
- denial of service - system and data availability
Intruders:
- passive intruders - read data
- casual snooping - curiosity or personal gain
- active intruders - modify data
- financial or personal gain - change data
- sabotage or terrorism - destroy data
Accidental Loss:
- operator or human involved - not intended, but has same effect as other threats
- acts of god, fire, flood, earthquake, etc
- hardware or software errors
- human errors
Basics of Cryptography
- conversion of plaintext to ciphertext
- secret-key - requires keeping shared key safe
- public-key - safer but more resource intensive
- protect data and establish identity
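The secret-key idea can be sketched with Python's standard hmac/hashlib modules: both parties hold the same key, so a matching tag both protects the data and establishes identity (the key and message here are invented for illustration):

```python
import hashlib
import hmac

key = b"shared-secret"                 # must be kept safe by both parties
msg = b"transfer 100 to alice"

# Sender computes an authentication tag over the message.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, msg, tag):
    """Receiver recomputes the tag; a mismatch means the data was
    tampered with or the sender does not hold the shared key."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Public-key schemes remove the shared-key distribution problem at the cost of more computation, which is the trade-off the bullets above describe.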
User Authentication - three general principles:
- something the user knows: passwords, pins, etc
- something the user has: token, cardkey, etc.
- something the user is: voice, fingerprint, iris, etc
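For the "something the user knows" factor, systems never store the password itself but a salted, deliberately slow hash. A sketch using PBKDF2 from Python's standard library (the iteration count is illustrative):

```python
import hashlib
import hmac
import os

ROUNDS = 100_000   # illustrative; chosen to make offline guessing expensive

def hash_password(password):
    """Store a random salt plus a slow salted hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive the hash from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return hmac.compare_digest(candidate, digest)
```

The per-user salt means two users with the same password get different digests, defeating precomputed lookup tables.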
Security Attacks:
- Trojan Horses - malicious code inside attractive application
- Logon Spoofing - collecting user information with phony login screen
- login sequence that cannot be spoofed (e.g. Ctrl-Alt-Del)
- Logic Bombs - malicious code inside a critical application
- Trap Doors - code to bypass authorization or authentication
- Buffer Overflow - overflowing a buffer to modify in-memory code
- enforce memory protections
- use standard I/O functions with buffer bounds checking
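The bounds check that safe I/O functions perform can be sketched in a few lines (a toy illustration of the principle, not a real I/O routine):

```python
def bounded_write(buf, offset, data):
    """Copy data into a fixed-size buffer, refusing any write past the
    end -- exactly the check that unsafe C routines like gets() omit."""
    if offset < 0 or offset + len(data) > len(buf):
        raise ValueError("write would overflow the buffer")
    buf[offset:offset + len(data)] = data
```

Rejecting the oversized write keeps attacker-supplied input from spilling into adjacent memory (such as a return address), which is the mechanism buffer-overflow exploits depend on.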
Protection Mechanisms:
- Protection domains
- OS defined - often user and group based
- set of (object, rights) pairs
- object identifies resources
- rights identify operations that can be performed
- access control lists (ACL)
- associated with resource or object
- identifies domains with access to object
- defines what that access is
- simplifies checking when access requested
- capabilities (similar to ACLs)
- entity that grants the owner rights to a resource
- associated with each request for access
- often in kernel or encrypted in user space
- common in distributed systems
- multilevel security
- used when higher security needed
- military, hospital, financial, etc
- each entity is assigned a security level
- controls information flow with rules about reading and writing at different levels
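The ACL mechanism above reduces to a lookup from an object to its (domain, rights) pairs; the objects, domains and rights below are invented for illustration:

```python
# Each object carries its own ACL: which protection domains may do what.
ACLS = {
    "payroll.db": {"hr": {"read", "write"}, "audit": {"read"}},
    "printer1":   {"hr": {"print"}, "audit": {"print"}},
}

def access_allowed(domain, obj, right):
    """Checking a request is a simple lookup on the object's ACL --
    this is why ACLs simplify checking at access time."""
    return right in ACLS.get(obj, {}).get(domain, set())
```

A capability system inverts this: instead of the object listing domains, each domain carries tokens naming the objects and rights it holds.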
Module 15: Examples of Operating System Architectures
Unix and Linux
- goal: interactive systems for multiple users
- initially designed for programmers
- expectation of experienced users
- focused on "fairness"
- interfaces
- system call interface - access to OS
- library interface - standardized for languages
- user interface - add-on programs
- Processes
- have parent/child relationship
- thread implementation varies by version
- Linux differs from Unix
- Scheduling
- priority based queues for fairness
- CPU usage lowers a process's priority
- waiting time raises it
- Linux adds real-time scheduling
- Memory Management
- current versions: virtual memory with paging
- child process gets copy of parent's memory
- File Systems
- single root to entire file system tree: root "/"
- additional disk drives mounted into tree
- some directory names required by convention (/etc, /bin, /dev)
- supports long case-sensitive file names
- extensions (eg ".txt") only informational
- each file has attributes, including owner
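The fairness mechanism described above (CPU use lowers priority, waiting raises it) can be sketched as a toy calculation, loosely modeled on the classic 4.3BSD scheduler; the constants are illustrative, not any real kernel's values:

```python
def priority(base, cpu_ticks):
    """Lower value = scheduled sooner; recent CPU use pushes a
    process toward the back of the queue."""
    return base + cpu_ticks // 4

def decay(cpu_ticks, load=1.0):
    """Run once per interval: forget some accumulated usage so a
    waiting process's priority recovers toward its base."""
    return int(cpu_ticks * (2 * load) / (2 * load + 1))
```

A CPU-bound process accumulates ticks and sinks down the queue; once it waits, decay() shrinks its tick count each interval and its priority climbs back, so no ordinary process starves indefinitely.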
Windows:
- goal: single user system
- currently used for servers, but still with some single-user focus (e.g. logging in at the main console to administer)
- interfaces
- everything is an object
- user interface through built-in GUI
- other interfaces for "subsystems"
- Win32, Win16 (Windows 3.x), POSIX, OS/2, etc.
- all configuration information centralized in the Registry
- Processes
- all processes are independent
- can have multiple OS managed threads
- process management system calls are the same across all versions
- Scheduling
- no central scheduler - threads enter kernel mode and run the scheduler code
- in effect, threads dispatch themselves
- threads are scheduled independently
- priority based but focused on user responsiveness
- Memory management
- virtual memory with paging and swapping
- OS kernel is mapped into each process address space
- page fault and replacement tied to the Intel Pentium architecture
- other processor manufacturers (like AMD) have copied or emulated it to maintain support
- File Systems
- FAT-16, FAT-32, NTFS, HPFS, etc.
- each drive has drive letter
- servers present a single directory tree (like Unix)
- supports long case-aware file names (Unix is case-sensitive)
- extensions (eg ".txt") identify associated program
- each file has an ACL
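The case-aware vs. case-sensitive distinction above is easy to sketch: NTFS stores the name as typed but matches without regard to case, while Unix compares names exactly (function names here are mine):

```python
def windows_match(a, b):
    """Case-aware (NTFS-style): 'Report.TXT' is stored as typed, but
    'report.txt' refers to the same file."""
    return a.casefold() == b.casefold()

def unix_match(a, b):
    """Case-sensitive (Unix-style): the names must match exactly,
    so 'Report.TXT' and 'report.txt' are different files."""
    return a == b
```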
MVS (mainframe z/OS):
- goal: batch throughput
- prioritize work using business requirements
- starvation is by design - meaning a low priority task may never run on a busy system
- majority of OS code for error checking and recovery
Embedded Systems:
- goal: small and customizable for special environment
- often based on version of Unix or Linux
- usually include real-time features
Distributed Operating Systems:
- single OS image across multiple physical systems
- shared resources
- highly redundant environment
- too complex for most commercial usage
- concepts implemented in other areas such as databases