Threads best practices (Mutex, Starvation / Fair Locking in Java, Thread Pool)
Mutex: Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of code called critical sections, e.g. a producer and a consumer accessing a queue. A good practice when using a mutex is to acquire the lock on a different object than the object in the critical section itself. For example, if a queue needs to be accessed mutually exclusively, one thread at a time, by producer and consumer threads, the Queue itself should not be locked; instead, define another object for the purpose, and apply the synchronized block to that object. The basic advantage of this approach is that it allows multiple mutexes for a single monitor object: one can define two mutexes, one for all the producer threads and another for the consumer threads, since the two operations can be performed in parallel but no two consumers should work in parallel.
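As a minimal sketch of this practice (the class and field names are illustrative, not from any library), the queue below is guarded by a dedicated lock object rather than being locked itself:

```java
import java.util.LinkedList;
import java.util.Queue;

public class SharedQueue {
    private final Queue<String> queue = new LinkedList<String>();
    // Dedicated mutex object: callers synchronize on this, never on 'queue'
    private final Object queueLock = new Object();

    public void put(String item) {
        synchronized (queueLock) {   // critical section for producers
            queue.add(item);
            queueLock.notifyAll();   // wake any consumers waiting for items
        }
    }

    public String take() throws InterruptedException {
        synchronized (queueLock) {   // critical section for consumers
            while (queue.isEmpty()) {
                queueLock.wait();    // releases the mutex while waiting
            }
            return queue.remove();
        }
    }
}
```

Because the mutex is a separate object, the design leaves room to split it later into distinct producer-side and consumer-side locks without changing the queue type itself.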
Starvation: Starvation is a situation in which a low-priority thread never gets its chance at a CPU cycle, a critical section, or a synchronized object. A synchronized block does handle atomicity, without race conditions or data corruption: only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from colliding with each other when updating shared state. But it has a few limitations:
• It is not possible to interrupt a thread that is waiting to acquire a lock, nor is it possible to poll for a lock or attempt to acquire a lock without being willing to wait forever for it.
• synchronized with wait/notifyAll does not maintain a queue of waiting threads; all the threads get notified and race for the critical section. When there is a bombardment of threads waiting for the critical section, there is a chance that one thread never gets to hold the synchronized block, causing starvation. Starvation can be avoided by giving a fair chance to all the waiting threads, commonly known as fair locking.
JDK 1.5 introduced a locking framework to overcome these shortcomings. Instead of a synchronized block, one uses "java.util.concurrent.locks.ReentrantLock" to make access to the critical section atomic, e.g.
    Lock lock = new ReentrantLock();
    lock.lock(); /* or lock.tryLock(20, TimeUnit.SECONDS); */
    try {
        // update object state
    } finally {
        lock.unlock();
    }
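The limitations listed above (no interruption, no polling, no fairness) are exactly what ReentrantLock's richer API addresses. A minimal sketch (the class and method names are illustrative) combining the fair FIFO acquisition policy with a polled acquire:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // Passing 'true' requests the fair (FIFO) acquisition policy,
    // so waiting threads acquire the lock roughly in arrival order.
    private final Lock lock = new ReentrantLock(true);

    public boolean updateIfAvailable() throws InterruptedException {
        // tryLock polls for the lock with a timeout instead of
        // blocking forever, and it can be interrupted while waiting.
        if (lock.tryLock(20, TimeUnit.SECONDS)) {
            try {
                // update object state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // could not acquire within the timeout
    }
}
```

lock.lockInterruptibly() is the third option: it blocks without a timeout but still responds to interruption, unlike entering a synchronized block.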
For JDK 1.4 users, there are two choices:
a) Use the JDK 1.4 backport of "java.util.concurrent.locks.ReentrantLock" from the "edu.emory.mathcs.backport" package, known as "edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock".
b) Implement a waiting-thread queue:

    package ub.utils.locker;

    public class QueueObject {
        private boolean isNotified = false;

        public synchronized void doWait() throws InterruptedException {
            while (!isNotified) {
                this.wait();
            }
            this.isNotified = false;
        }

        public synchronized void doNotify() {
            this.isNotified = true;
            this.notify();
        }

        public boolean equals(Object o) {
            return this == o;
        }
    }

    package ub.utils.locker;

    import java.util.ArrayList;
    import java.util.List;

    public class Locker {
        private boolean isLocked = false;
        private Thread lockingThread = null;
        private List waitingThreads = new ArrayList();

        public void lock() throws InterruptedException {
            QueueObject queueObject = new QueueObject();
            boolean isLockedForThisThread = true;
            synchronized (this) {
                waitingThreads.add(queueObject);
            }
            while (isLockedForThisThread) {
                synchronized (this) {
                    // Only the thread at the head of the queue may take the lock
                    isLockedForThisThread =
                        isLocked || !((QueueObject) waitingThreads.get(0)).equals(queueObject);
                    if (!isLockedForThisThread) {
                        isLocked = true;
                        waitingThreads.remove(queueObject);
                        lockingThread = Thread.currentThread();
                        return;
                    }
                }
                try {
                    queueObject.doWait();
                } catch (InterruptedException e) {
                    synchronized (this) {
                        waitingThreads.remove(queueObject);
                    }
                    throw e;
                }
            }
        }

        public synchronized void unlock() {
            if (this.lockingThread != Thread.currentThread()) {
                throw new IllegalMonitorStateException(
                    "Calling thread has not locked this lock");
            }
            isLocked = false;
            lockingThread = null;
            if (waitingThreads.size() > 0) {
                ((QueueObject) waitingThreads.get(0)).doNotify();
            }
        }
    }
Thread Pool / ThreadPoolExecutor: Use the thread pool pattern, where a number of threads are created to perform a number of tasks, which are usually organized in a queue. As soon as a thread completes its task, it requests the next task from the queue, until all tasks have been completed. The thread can then terminate, or sleep until new tasks become available. The advantages are:
• If the number of tasks is very large, creating a thread for each one may be impractical.
• By using a thread pool instead of creating a new thread for each task, the thread creation and destruction overhead is negated.
• You have the flexibility to tune the "number of threads" parameter in order to achieve the best performance.
• Moreover, one can have the number of threads change dynamically based on the number of waiting tasks.
• With the help of beforeExecute and afterExecute, one can easily inject cross-cutting concerns, e.g. time logging or initialization of thread parameters.
e.g. a thread pool extended with logging and timing, via beforeExecute and afterExecute:
    public class TimingThreadPool extends ThreadPoolExecutor {
        private final ThreadLocal<Long> startTime = new ThreadLocal<Long>();
        private final Logger log = Logger.getLogger("TimingThreadPool");
        private final AtomicLong numTasks = new AtomicLong();
        private final AtomicLong totalTime = new AtomicLong();

        protected void beforeExecute(Thread t, Runnable r) {
            super.beforeExecute(t, r);
            log.fine(String.format("Thread %s: start %s", t, r));
            startTime.set(System.nanoTime());
        }

        protected void afterExecute(Runnable r, Throwable t) {
            try {
                long endTime = System.nanoTime();
                long taskTime = endTime - startTime.get();
                numTasks.incrementAndGet();
                totalTime.addAndGet(taskTime);
                log.fine(String.format("Thread %s: end %s, time=%dns", t, r, taskTime));
            } finally {
                super.afterExecute(r, t);
            }
        }

        protected void terminated() {
            try {
                log.info(String.format("Terminated: avg time=%dns",
                    totalTime.get() / numTasks.get()));
            } finally {
                super.terminated();
            }
        }
    }
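As a companion sketch, the pool-sizing knobs mentioned above (core size, maximum size, keep-alive time) can be exercised directly on a plain ThreadPoolExecutor; all class names and parameter values here are illustrative, not a recommendation:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static int runTasks(int taskCount) throws InterruptedException {
        // 2 core threads, growing to 4 when the bounded queue fills;
        // extra idle threads die after 30s. CallerRunsPolicy throttles
        // submission by running overflow tasks on the submitting thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(10),
                new ThreadPoolExecutor.CallerRunsPolicy());

        final AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(new Runnable() {
                public void run() { completed.incrementAndGet(); }
            });
        }
        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }
}
```

Note that the maximum pool size only matters with a bounded work queue; with an unbounded queue the pool never grows beyond the core size.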
Avoid deadlocks:
1. ALL of the below four conditions MUST hold for a deadlock to occur, so to avoid deadlock at least one of the conditions must not hold at any given point in time.
Mutual exclusion: The resource cannot be shared; requests are delayed until the resource is released.
Hold-and-wait: A thread holds one resource while it waits for another.
No preemption: Resources are released only voluntarily, after completion.
Circular wait: Circular dependencies exist in the "waits-for" or "resource-allocation" graph.
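A common way to break the circular-wait condition is to impose a global lock-acquisition order. A minimal sketch, assuming a hypothetical Account class where a unique numeric id defines the order:

```java
public class Account {
    private final int id;   // unique id defines the global lock order
    private int balance;

    public Account(int id, int balance) {
        this.id = id;
        this.balance = balance;
    }

    public static void transfer(Account from, Account to, int amount) {
        // Both transfer directions lock the two accounts in the same
        // (id-ascending) order, so a "waits-for" cycle can never form.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public int getBalance() {
        return balance;
    }
}
```

Without the ordering, transfer(a, b, …) and transfer(b, a, …) running concurrently could each hold one account while waiting for the other, satisfying all four conditions at once.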