COMP310_SynchroP

Synchronization Primitives

Concurrency

Purpose

The motivation for concurrency:

  • CPU trend: same clock speed, multiple cores <- transistor counts still grow (Moore's Law), but single-core speed does not
  • Goal: write applications that fully utilize many cores
Options:
  • Multiprocess
    • apps built from many communicating processes
    • communicate through message passing -> no shared memory
    • Pros: one process crashing does not affect the other processes
    • Cons: high communication overhead & expensive context switching
  • Multithread
    • multiple threads within one process
    • communicate through memory sharing (shared address space)
    • Pros: high efficiency
    • Cons: one thread crashing takes down the whole process
Non-Determinism:
  • Concurrency leads to non-deterministic results -> the outcome depends on the CPU scheduler
    • Different results with the same inputs
    • Race conditions

Thread

Def

The smallest schedulable unit of execution managed by the OS. The essential property of threads is data (memory) sharing.

Difference to Process
  • Processes provide separation -> suitable for coarse interaction
  • Threads do not provide separation -> suitable for tighter integration
Shared Data
  • Advantage: many threads can read/write it
  • Disadvantage: data races
Data Races
  • Unwanted concurrent access to shared data
  • A result of the interleaving of thread executions
  • A program must be correct for all possible interleavings

Data races are one of the major causes of the non-determinism of concurrency.

Synchronization Primitives:

implemented in kernel mode, on top of hardware support

Software
  • Monitors
  • Semaphores
  • Locks (mutex)
  • Condition Variables
Hardware
  • Loads
  • Stores
  • Test&Set
  • Disable Interrupts
Atomicity (Mutual Exclusion)

The problems above lead to the following basic approach to multithreading:
The core idea is atomicity (mutual exclusion of the critical section).

  1. Divide work among multiple threads
  2. Shared data -> put all shared-data accesses inside a critical section (atomicity)
Purpose:

Prevents simultaneous access to a shared resource (memory region, …); solves the critical section problem.

We need library support (pthreads) to achieve mutual exclusion.

POSIX Thread Libraries

  • Thread API for C/C++
  • User-level library:
    • #include <pthread.h>
    • Compile and link with -pthread
  • Support for thread creation, termination, synchronization

pthread_create(); pthread_exit(); pthread_join()

Pthread Locks:
## Notice that the lock itself is a shared variable
pthread_mutex_lock(&mutex)
If the lock is held by another thread, block; otherwise, acquire the lock and proceed.
Note that acquiring the lock always involves checking its state, which consumes system resources.
pthread_mutex_unlock(&mutex)
Release the lock.

Deadlocks

Threads are stuck waiting for resources held by other blocked threads, and no amount of retrying will help.

Condition Variables

A condition variable is a synchronization primitive that is a bit more complex than a lock.

  • Used when thread A needs to wait for an event performed by thread B
  • A waits until a certain condition is true
    1. Test the condition
    2. If the condition is not true, call
      pthread_cond_wait() -> causes A to be blocked until the condition becomes true
  • At some point B makes the condition true
    1. Then B calls pthread_cond_signal() -> unblocks A

The condition variable is itself shared data, so a lock (mutex) is introduced here.

Condition Variable Interface
pthread_cond_init(pthread_cond_t *cv, pthread_condattr_t *cattr)

  • Initialize the condition variable; cattr can be NULL

pthread_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mutex)

  • Block the thread until the condition variable is signaled, atomically releasing the mutex while waiting (and re-acquiring it before returning)

pthread_cond_signal(pthread_cond_t *cv)

  • Unblock one thread blocked on the condition variable (which one is unspecified)

pthread_cond_broadcast(pthread_cond_t *cv)

  • Unblock all threads blocked on the condition variable
Advantage

A waits (sleeps) until a certain condition is true; the CPU is not kept busy (few resources are used).

Two Problems with the Naive Condition Variable and Their Solution

Check the lecture.

Semaphores

#include <semaphore.h>
A semaphore is a shared, non-negative counter with two primary operations: wait (attempts to decrement the counter; blocks while the counter is 0) and post/signal (increments the counter).

Notice that the increment and decrement operations on a semaphore are atomic.

A semaphore's value can be initialized upon creation to a positive value or to 0.

Uses
  • Mutual exclusion:
    A semaphore with its counter initialized to 1 is equivalent to a lock.
  • Bound the concurrency:
    Initialized to a larger value K to allow at most K of the N threads to proceed at once.
  • Consumer-Producer Problem
Interface

int sem_init(sem_t *sem, int pshared, unsigned value)
int sem_post(sem_t *sem)
int sem_wait(sem_t *sem)


Hardware Implementation of Synchronization

Problem Specifications
  • Safety (no trespassing on the protected critical section)
  • Liveness (the final goal is eventually achieved)
Lock Implementation

Check lecture for detailed notes.

Store & Load

Treat a store as a signaling method to indicate the running status of a thread; use loads to decide whether to run based on that status.
Disadvantage: busy-waiting

Disable Interrupts

The thread uses a LOCK syscall to disable interrupts in the kernel, and re-enables interrupts after finishing.
Adv: compared to store & load, the waiting thread can be put to sleep instead of busy-waiting