
CPL 5: Parallel and Concurrent

by Hongwoo 2024. 4. 9.


    Single Core Concurrency

    Multiprogramming allows a single-core processor to run multiple programs by rotating among them: several programs are loaded in memory at once, and each takes turns executing instructions on the CPU.

    Two multiprogramming concepts: 

    Multi-user system: Multiple users share a single-core machine. The single-core processor allocates time to every terminal to execute its operations, so each user appears to have independent use of the system even though it is shared.

    Multi-tasking system: With multi-tasking, a single-core system can run multiple programs seemingly simultaneously by dedicating CPU time to each program without discarding its memory. The running programs do not interfere with one another; this is achieved by switching process execution on the CPU from program to program.

     

    Advantages:

    - Allows the execution of multiple applications simultaneously

    - Total time to execute a job is reduced since there is prioritization in place

    - CPU usage is more optimized, since the CPU can have less idle time

    - Multiple users can use a single system at the same time

    - Programs can share resources: CPU, memory, I/O accesses

    - If one program crashes, the entire system will not crash - hence program execution can be independent

     

    Disadvantages:

    - It requires a CPU scheduling algorithm that can assign execution time to every process

    - Processes requiring longer execution time may get starved, since they are not prioritized by the algorithm

    - A user cannot interact with their program while it is executing a task

    - Synchronization can become difficult in the case that programs need to access shared memory

    - Security risks - if one program is compromised, this can affect other programs as well, especially if resources are shared

    - Supporting multi-programming can also add hardware complexity

     

     

     

    Concurrency Problems

    Unsafe Transactions

     

    package weblab;
    
    class Solution {
    
        public static int main() {
            Account account = new Account(1000);
    
            // Initialize threads for credit transactions
            CreditThread t1 = new CreditThread(account, 150);
            CreditThread t2 = new CreditThread(account, 200);
            CreditThread t3 = new CreditThread(account, 1010);
    
            // Initialize threads for debit transactions
            DebitThread t4 = new DebitThread(account, 120);
            DebitThread t5 = new DebitThread(account, 10);
            DebitThread t6 = new DebitThread(account, 750);
    
            // Start all threads
            t1.start();
            t2.start();
            t3.start();
            t4.start();
            t5.start();
            t6.start();
            
            return account.balance;
        }
    
        static class CreditThread extends Thread {
            Account account;
            int credit;
    
            // Initialize CreditThread with the account (shared across all threads)
            // and the amount to be credited to the account
            public CreditThread(Account account, int credit) {
                this.account = account;
                this.credit = credit;
            }
    
            // Override the `run()` method from class Thread
            @Override
            public void run() {
                this.account.executeTransaction(credit);
            }
        }
    
        static class DebitThread extends Thread {
            Account account;
            int debit;
    
            // Initialize DebitThread with the account (shared across all threads)
            // and the amount to be debited from the account
            public DebitThread(Account account, int debit) {
                this.account = account;
                this.debit = debit;
            }
    
            // Override the `run()` method from class Thread
            @Override
            public void run() {
                this.account.executeTransaction(-debit);
            }
        }
    
        static class Account {
            int balance;
    
            public Account(int balance) {
                this.balance = balance;
            }
    
            public void executeTransaction(int amount) {
                this.balance += amount;
            }
        }
    }

     

    1. What is the final balance you expected after all the transactions in the main() method are executed? Does the output match this calculated amount?

    1480, i.e. the starting balance with all credits added and all debits subtracted: 1000 + 150 + 200 + 1010 - 120 - 10 - 750 = 1480. No, the output does not reliably match this expected amount.

     

    2. Is the output the same each time, i.e. is the program deterministic?

    No. The output differs between runs, so the program is not deterministic.

     

    3. What concurrency issue(s) can you identify in the code snippet and why are these issues undesirable?
    There is a race condition because the += operation is not atomic; it actually consists of three operations. For example, x += 5 represents x = x + 5, which is executed in three separate steps:

    1. read x
    2. add 5 to that value
    3. write this value to x
    If there are two threads (A and B) both updating the value, then the following ordering will yield an incorrect result:

    A1, B1, A2, A3, B2, B3

    If we start with x = 10 and both threads attempt to add 5, then the final result with this ordering will be 15 instead of the expected 20:

    A1: A reads x -> 10
    B1: B reads x -> 10
    A2: A adds 5 -> 15
    A3: A stores 15 in x
    B2: B adds 5 -> 15
    B3: B stores 15 in x, overwriting A's update
    The more threads we add, the more likely these issues are to occur. If we run the code on an actual multi-core system, the chance of operations overlapping becomes even higher. 

     

    4. How would you approach resolving these issues? 
    These issues could be resolved by treating the balance update as a critical section that only a single thread can enter at a time (e.g. with locks or the synchronized keyword), so that only one change is made at a time. We could also use the join() method of the Thread class so that main() waits for each thread to finish before reading the final balance. 
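
    A minimal sketch of this fix, keeping the classes from the snippet above: executeTransaction is made synchronized so that only one thread at a time can update the balance, and main() joins the threads before reading the result.

    static class Account {
        int balance;

        public Account(int balance) {
            this.balance = balance;
        }

        // synchronized: the read-modify-write of balance can no longer interleave
        // between threads operating on the same Account.
        public synchronized void executeTransaction(int amount) {
            this.balance += amount;
        }
    }

    // In main(), after starting t1..t6, wait for all of them to finish before
    // reading the balance (join() can throw InterruptedException):
    //     t1.join(); t2.join(); t3.join(); t4.join(); t5.join(); t6.join();
    //     return account.balance; // now deterministically 1480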

     

    5. Is there a downside to your proposed solution?
    Adding a lock around every account means extra memory is used (for all the locks), and operations have to wait for each other. This could impact performance. 

     

     

    Deadlocks

    package weblab; 
    
    class Solution {
    
        static class Thread1 extends Thread {
            private Object sharedObject1;
            private Object sharedObject2;
    
            // initialize the thread with the shared objects
            public Thread1(Object sharedObject1, Object sharedObject2) {
                this.sharedObject1 = sharedObject1;
                this.sharedObject2 = sharedObject2;
            }
    
            // override the `run` method since `Thread1` extends from `Thread` class
            @Override
            public void run() {
    
                // This thread will lock the `sharedObject1`
                synchronized(sharedObject1) {
                    System.out.println("Thread 1: Object 1");
    
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
    
                    // This thread will try to lock the `sharedObject2`
                    // However, the `sharedObject2` is already locked by another thread.
                    synchronized(sharedObject2) {
                        System.out.println("Thread 1: Object 2");
                    }
                }
            }
        }
    
        static class Thread2 extends Thread {
            private Object sharedObject1;
            private Object sharedObject2;
    
            // initialize the thread with the shared objects
            public Thread2(Object sharedObject1, Object sharedObject2) {
                this.sharedObject1 = sharedObject1;
                this.sharedObject2 = sharedObject2;
            }
    
            // override the `run` method since `Thread2` extends from `Thread` class
            @Override
            public void run() {
    
                // This thread will lock the `sharedObject2`
                synchronized(sharedObject2) {
                    System.out.println("Thread 2: Object 2");
    
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
    
                    // This thread will try to lock the `sharedObject1`
                    // However, the `sharedObject1` is already locked by another thread.
                    synchronized(sharedObject1) {
                        System.out.println("Thread 2: Object 1");
                    }
                }
            }
        }
    }

     

    Two shared objects are created, and two threads are initialized with them (t1 of type Thread1 and t2 of type Thread2); this driver code is not shown in the snippet above.
    The threads are started using the start() method.
    t1 first locks sharedObject1, as part of the outermost synchronized block in its run() method. 

    The system prints "Thread 1: Object 1" to the console. t2 does the same with sharedObject2. 

    The system prints "Thread 2: Object 2" to the console.
    Since there is a nested synchronized block, t1 tries to lock sharedObject2, which was already locked by t2. 

    Hence, t1 waits for t2 to release the lock. 

    Meanwhile, t2 attempts the same with the sharedObject1 locked by t1.
    This leads to a deadlock: each of t1 and t2 is waiting for the other thread to release its lock, causing the test to time out. 

    Therefore, "Thread 1: Object 2" and "Thread 2: Object 1" are never printed.

     

    → Lock ordering: threads always acquire locks in a fixed order.

    If all users of a resource acquire the same set of locks in the same order, the system is deadlock free.
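
    A minimal sketch of this fix applied to the snippet above: if Thread2 acquires the locks in the same order as Thread1 (sharedObject1 first, then sharedObject2), the circular wait can no longer occur.

    // Thread2.run() rewritten to follow the same lock order as Thread1
    @Override
    public void run() {
        synchronized (sharedObject1) {       // acquire object 1 first, like Thread1
            System.out.println("Thread 2: Object 1");
            synchronized (sharedObject2) {   // then object 2
                System.out.println("Thread 2: Object 2");
            }
        }
    }

    Whichever thread acquires sharedObject1 first now also acquires sharedObject2 before releasing anything, so neither thread ends up waiting on the other.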

     

     

     

    Concurrency Solutions

    package weblab;
    
    class Solution {
        
        public static void main() {
            RunnableTask task = new RunnableTask();
            
            // Create two threads that have access to the same object
            Thread thread1 = new Thread(() -> task.run(100));
            Thread thread2 = new Thread(() -> task.run(2));
            
            // Start the threads
            thread1.start();
            thread2.start();
        }
        
        static class RunnableTask {
            
            protected int sharedInt = 0;
    
            public void run(int increment) {
                for (int i = 0; i < 10; i++) {
                    sharedInt += increment;
                    System.out.println("Thread " + Thread.currentThread().getId() + " is running with sharedInt = " + sharedInt);
                }
            }
        }
    }

     

    Suppose that we start multiple threads that have shared-memory access to a RunnableTask object with a variable sharedInt. Without synchronisation, each thread modifies the value of sharedInt in its own way, depending on when it is scheduled to execute. 

    This can introduce race conditions and incorrect output: we get different results every time, and the value of sharedInt fluctuates in an unexpected manner. 

     

    Add the synchronized keyword to run(): no other thread can then execute the method on the same object at the same time, so the threads will not interfere with one another (see the sketch below). 
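
    A minimal sketch of that change to RunnableTask (only the run method's signature changes):

    static class RunnableTask {

        protected int sharedInt = 0;

        // synchronized: both threads call run() on the same task object,
        // so only one of them can be inside the method at a time.
        public synchronized void run(int increment) {
            for (int i = 0; i < 10; i++) {
                sharedInt += increment;
                System.out.println("Thread " + Thread.currentThread().getId()
                        + " is running with sharedInt = " + sharedInt);
            }
        }
    }

    Note that this serializes the whole loop; a synchronized block around just the line sharedInt += increment would still prevent lost updates while allowing the loops to interleave.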

     

    If two threads attempt to modify a value at the same time, we may get incorrect results and lost updates → Race condition: the output of an application depends on the sequence or timing of individual instructions.

     

    The statement sharedInt += increment consists of 3 operations: 

    1. Retrieve the value of sharedInt;
    2. Calculate sharedInt + increment;
    3. Store the result of the addition in sharedInt.
    During execution, any thread can be at any of the three steps above. Therefore, a thread can read a stale value or overwrite a correct value with an incorrect one.

     

     

    Read Write Locks

    The synchronized keyword is a very useful tool for locking. However, because it is block-based, there are certain things you cannot express with it; for example, complex lock-unlock sequences that do not follow block structure.

    Locks in Java also allow you to specify fairness, if you want to ensure that requests for the lock are served in the order in which they arrive.
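
    For instance, a ReentrantLock can be created in fair mode, and its lock()/unlock() calls are ordinary method calls that do not have to follow block structure (a brief sketch; the FairCounter class is only illustrative):

    import java.util.concurrent.locks.ReentrantLock;

    class FairCounter {
        // true = fair mode: waiting threads acquire the lock roughly in arrival order
        private final ReentrantLock lock = new ReentrantLock(true);
        private int value = 0;

        public void increment() {
            lock.lock();
            try {
                value++;
            } finally {
                lock.unlock();
            }
        }

        // lock() and unlock() are ordinary method calls, so acquisition and release
        // can live in different methods, which a synchronized block cannot express.
        public void beginUpdate() { lock.lock(); }
        public void endUpdate() { lock.unlock(); }
    }

    The read-write lock used below comes from the same java.util.concurrent.locks package and additionally distinguishes shared read access from exclusive write access.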

     

    package weblab;
    
    import java.util.concurrent.locks.*;
    
    public class Solution {
        private ReadWriteLock lock = new ReentrantReadWriteLock();
        private Database db = new Database();
    
        public Object get(int id) {
            lock.readLock().lock();
            try {
                return db.get(id);
            } finally {
                lock.readLock().unlock();
            }
            
        }
    
        public void add(int id, Object obj) {
            lock.writeLock().lock();
            try {
                db.add(id, obj);
            } finally {
                lock.writeLock().unlock();
            }
            
        }
    
        public Object remove(int id) {
            lock.writeLock().lock();
            try {
                return db.remove(id);
            } finally {
               lock.writeLock().unlock(); 
            }
            
        }
    }

     

     

     

    Atomicity

    package weblab;
    
    import java.util.concurrent.atomic.AtomicInteger;
    
    class Solution {
        
        public static void main() {
            RunnableTask task = new RunnableTask();
            
            // Create two threads that have access to the same object
            Thread thread1 = new Thread(() -> task.run(100));
            Thread thread2 = new Thread(() -> task.run(2));
            
            // Start the threads
            thread1.start();
            thread2.start();
        }
        
        static class RunnableTask {
            
            protected AtomicInteger sharedInt = new AtomicInteger(0);
    
            public void run(int increment) {
                for (int i = 0; i < 10; i++) {
                    sharedInt.addAndGet(increment);
                }
            }
        }
    }

     

     

     

    Safety and Intention

    private final Lock l = new ReentrantLock(); // e.g. java.util.concurrent.locks.ReentrantLock
    
    private void doNetworkStuff() throws Exception { ... }
    
    public void doStuff() throws Exception {
      l.lock();
      doNetworkStuff();
      l.unlock();
    }

     

    Is this implementation safe for concurrent use? If so, explain why. If not, explain how to fix it. 

    Not safe. The doNetworkStuff method declares that it can throw exceptions, which are propagated to doStuff. If an exception is thrown after locking, the lock is never unlocked, so all future calls to doStuff will wait forever for a lock that will never be released.
    To fix it, use e.g. a Java try-finally block, as sketched below (reusing the field l from the snippet above):
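
    public void doStuff() throws Exception {
      l.lock();
      try {
        doNetworkStuff();
      } finally {
        // the lock is released even if doNetworkStuff() throws
        l.unlock();
      }
    }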

     

     

    class MyAtomicCounter {
      public int counter = 0;
    
      // One operation to ensure atomic
      public void increment() { counter++; }
    
      // One operation to ensure atomic
      public int get() { return counter; }
    }

     

    Is this MyAtomicCounter safe to be used concurrently? Explain why (not).

    No, ++ is not an atomic operation. It is actually 3 operations:

    1. Read the value
    2. Add 1
    3. Store the value
    If multiple threads increment at the same time, some increments may be lost.
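
    A sketch of a safe version using AtomicInteger (as in the Atomicity example above); making both methods synchronized would work as well:

    import java.util.concurrent.atomic.AtomicInteger;

    class MyAtomicCounter {
      private final AtomicInteger counter = new AtomicInteger(0);

      // incrementAndGet() performs the read, add and store as one atomic operation
      public void increment() { counter.incrementAndGet(); }

      public int get() { return counter.get(); }
    }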

     

     

     

    private RequestDatabase requestDb = (...);
    
    private Request getNextRequest() {
      return requestDb.getFirstUnassignedRequest();
    }
    
    private void assignRequestToTA(Request r, TA t) {
      r.setAssigned(t);
      requestDb.save(r);
    }
    
    public void handleNextRequestButton(TA t) {
      Request r = getNextRequest();
      synchronized (this) {
        assignRequestToTA(r, t);
      }
    }

     

    In this snippet, assigning a request to a TA happens inside a synchronized block so that modifications to the request database do not interfere with each other. For performance reasons, the read operation (getNextRequest) is not synchronized. 

    [Exercise]: Is this code safe to be used concurrently? Explain why (not). 

    If the requestDb itself does not prevent concurrent reads and writes, the implementation is already unsafe for that reason alone.

    The implementation is not correct regardless, because it is still possible for two TAs to be assigned the same request: the request is picked (getNextRequest) before the synchronized block is entered, so two threads can both retrieve the same unassigned request. 

    To fix it, extend the synchronized block around the getNextRequest as well.
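
    A sketch of that fix, using the same hypothetical methods as the snippet above:

    public void handleNextRequestButton(TA t) {
      synchronized (this) {
        // Picking and assigning the request now happen atomically,
        // so two TAs can no longer retrieve the same unassigned request.
        Request r = getNextRequest();
        assignRequestToTA(r, t);
      }
    }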

     

     

    Sharing memory between concurrent transactions

    Deadlocks and how to avoid them

    Race conditions

     
