Sunday, January 09, 2011

Java Concurrency In Practice


Although computing performance keeps increasing with new cutting-edge devices, high-performance and low-latency programming still plays an important role in utilising them. Unlike languages such as C/C++, Java provides compact, portable, intuitive, built-in components to take full advantage of available resources for high-performance computing. In this text, I won't claim that Java performs better than other available languages (there are enough blogs about such benchmarks). I will focus on how to use Java to develop high-performance applications. As in some of my other articles, I will use a book, Java Concurrency in Practice, to explain concurrency in Java.

Without a doubt, Java Concurrency in Practice is a unique book in this field and a must-read for every Java developer working with concurrency. The authors have first-hand experience developing the concurrency and threading packages at Sun and enormous practical experience.
Concurrency is not a new subject in software development. As with other engineering problems, utilising available resources fairly and conveniently has always been one of the main objectives. To use finite and costly resources this way, Java provides a highly optimised, compact concurrency framework which includes threads, thread execution services, memory-sharing facilities, concurrent and synchronized data structures, explicit and implicit locks, and atomic variables.

Threads

To take full advantage of available resources (CPU, memory, disk, network), an application can start multiple threads. A thread is similar to a process, but its life cycle is bound to the process it belongs to and it is scheduled within that process. Each thread has its own stack for local variables and its own execution path, but shares the heap with other threads for global objects.
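
As a minimal sketch (the class name and the printed message are just illustrative), a thread can be created with a Runnable task and started as follows:

public class HelloThread {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Runs on the worker thread, with its own stack,
            // but shares the heap with the main thread
            System.out.println("Running in: " + Thread.currentThread().getName());
        });
        worker.start();  // schedules the thread; does not block the caller
        worker.join();   // the main thread waits for the worker to finish
    }
}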

Threads provide parallel processing within an application, simplicity of modelling, handling of asynchronous events, and more responsive feedback. Parallel processing is obtained by exploiting available multiple processors (or by simulating them). Dividing the whole problem or task into sub-problems or sub-tasks and assigning each part to a thread is a natural, simple way to model a complex problem. Since many resources, such as I/O, are asynchronous, threads that do not block on them also provide higher resource utilisation. Another advantage of threads is responsiveness: by assigning specific tasks (e.g., UI interaction) to specific threads (e.g., the GUI thread), better response and throughput can be obtained.

Java uses platform-specific frameworks to implement threads (for example, Pthreads on Linux), so some limitations are platform specific: each platform, for instance, has a theoretical maximum number of threads. In practice, for high-performance computing the number of threads is closely tied to the number of available processors and to the idle time between subtask executions.
Threads are very common in software development. Even if an application does not use threads explicitly, the libraries or frameworks it relies on often do.
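
As a rough, hypothetical sketch of the sizing advice above (the class name and task count are assumptions), a fixed thread pool can be sized to the processor count reported by the runtime:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        // For CPU-bound work, a pool roughly the size of the processor count
        // keeps all cores busy without excessive context switching
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        for (int i = 0; i < 10; i++) {
            final int task = i;
            pool.submit(() -> System.out.println(
                    "task " + task + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();  // stop accepting new tasks; queued tasks still run
    }
}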

Risk of Threads

As explained above, threads provide many benefits, but these benefits come at the cost of safety, liveness, and performance hazards.

Since many threads may access mutable shared objects and resources at the same time, they can leave those objects and resources in inconsistent states. Therefore, when shared objects are accessed, a thread-safety mechanism must be in place to protect the consistency of mutable state.
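
A small, hypothetical example of such an inconsistency: the UnsafeCounter class below increments a shared field without synchronization, so two threads can interleave the read-modify-write steps and lose updates.

public class UnsafeCounter {
    private int count = 0;

    public void increment() {
        count++;  // read-modify-write: not atomic, so updates can be lost
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        UnsafeCounter counter = new UnsafeCounter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints less than 200000 because some increments are lost
        System.out.println(counter.get());
    }
}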

To provide thread safety, mechanisms such as locks and synchronization are used. While these mechanisms provide safety, they also block or limit access to resources and can therefore cause liveness issues. If threads are not managed well, context switching and synchronization add overhead and hurt performance.

Thread Safety

If a mutable object is shared by many threads at the same time, its state can become inconsistent unless proper synchronization is applied, since multiple threads can access and modify it concurrently. To keep the object's state consistent and deterministic, one of the following has to be done:

  • Don't share the object across threads
  • Make the object immutable
  • Use synchronization mechanisms to control access to and modification of the shared object by multiple threads.

In the book, thread safety is defined as follows:

"A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code."

Good object-oriented techniques such as encapsulation and immutability improve thread safety. With encapsulation, the inner state of objects and the synchronized blocks that guard it are hidden from clients.

Thread safety can be obtained by implementing a proper synchronization mechanism with:

Implicit locking: Access to a resource is guarded with the implicit (intrinsic) lock of an object. In Java, each object holds an implicit lock, and the synchronized keyword is used to acquire it. In addition, the volatile keyword guarantees memory visibility.

synchronized void myMethod() {
    // Access or modify shared state guarded by "this"
}

static synchronized void myStaticMethod() {
    // Access or modify shared state guarded by the Class object itself
}

void myBlockMethod() {
    synchronized (lock) {
        // Access or modify shared state guarded by the lock object
    }
}

Explicit locking: In addition to implicit locks, Java provides more flexible locking and synchronizer classes for various situations. Some of these classes are ReentrantLock, ReentrantReadWriteLock, Semaphore, CountDownLatch, SynchronousQueue, and FutureTask.
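
As a minimal sketch of explicit locking with ReentrantLock (the counter class itself is just illustrative), the lock is acquired before touching shared state and always released in a finally block:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ExplicitLockCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();        // block until the lock is acquired
        try {
            count++;        // shared state guarded by the lock
        } finally {
            lock.unlock();  // always release, even if the body throws
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}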

Atomic variables: Some operations, such as incrementing a shared counter in a multithreaded environment, must be atomic; that is, the read and the modification must be performed as a single indivisible operation. Java uses a compare-and-swap (CAS) mechanism, backed by platform-specific instructions, to implement atomicity. Some of these atomic classes are AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference.
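
A minimal sketch with AtomicInteger (the class name is illustrative); the explicit compareAndSet loop at the end shows the retry pattern that CAS-based updates rely on:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet();  // atomic read-modify-write, CAS under the hood
    }

    public int get() {
        return count.get();
    }

    // The same effect written as an explicit CAS retry loop
    public void incrementWithCas() {
        int current;
        do {
            current = count.get();
        } while (!count.compareAndSet(current, current + 1));
    }
}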

Built-in concurrent or synchronized data structures: Java provides various built-in thread-safe data structures such as Vector, Hashtable, and the classes in java.util.concurrent.*.
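
As a small illustrative sketch (the class and method names are assumptions), a ConcurrentHashMap can stand in for an externally synchronized map when per-key updates need to be atomic:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public void record(String word) {
        // merge() performs the read-modify-write atomically for this key
        counts.merge(word, 1, Integer::sum);
    }

    public int countOf(String word) {
        return counts.getOrDefault(word, 0);
    }
}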

To enter a synchronized or locked block, a thread must acquire the associated lock; otherwise it blocks on that lock. In Java, intrinsic locks are reentrant: if a thread already holds a lock and re-enters a block guarded by the same lock, it will not block itself.
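
A minimal sketch of reentrancy (the class is illustrative): a synchronized method calls another synchronized method on the same object and does not block itself, because the thread already holds the intrinsic lock:

import java.util.ArrayList;
import java.util.List;

public class ReentrantExample {
    private final List<Integer> values = new ArrayList<>();

    public synchronized void add(int value) {
        values.add(value);
    }

    public synchronized void addAll(int... items) {
        for (int item : items) {
            add(item);  // reacquires the lock this thread already holds
        }
    }
}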

Sharing Objects

If mutable objects were never shared between threads, developing multithreaded applications would cause little headache. In practice, however, few multithreaded applications avoid sharing mutable objects across threads.

When a mutable object is shared, it must be guarded so that multiple threads cannot modify it at the same time. Once a modification is done, the latest state of the shared object must become visible to other threads. Visibility of a shared object is guaranteed with locks (implicit and explicit, such as synchronized) and with the volatile keyword. In other words, the synchronized keyword does not only prevent multiple threads from accessing an object at the same time; it also publishes the state of the object to other threads once the synchronized block completes.

Without the visibility and publication guarantees of synchronization, an object can appear stale: other threads may not see its up-to-date values. Nonetheless, when a thread reads a variable, it reads a value that was actually written by some thread (perhaps not the latest value, but not a random one). This guarantee is called out-of-thin-air safety. It holds for all variables except non-volatile 64-bit variables such as double and long; these must be declared volatile, otherwise the JVM is allowed to read and write them as two separate 32-bit operations.
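
As a minimal sketch of the visibility guarantee (the class name is illustrative), a volatile flag written by one thread is reliably seen by another thread spinning on it; without volatile, the loop could keep reading a stale cached value:

public class VolatileFlag {
    private volatile boolean shutdownRequested = false;

    public void requestShutdown() {
        shutdownRequested = true;  // becomes visible to other threads
    }

    public void workLoop() {
        while (!shutdownRequested) {
            // do a unit of work
        }
    }
}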

An object must be well encapsulated so that it does not expose its inner state. Otherwise, exposed or escaped inner state can lead to inconsistency and visibility problems across threads.
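
A hypothetical sketch of an escaping reference: returning the internal mutable list hands callers unguarded access to the object's state, while returning a read-only copy keeps the state encapsulated:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class StateHolder {
    private final List<String> names = new ArrayList<>();

    public synchronized void add(String name) {
        names.add(name);
    }

    // Unsafe: the internal mutable list escapes and can be modified
    // by any caller without synchronization
    public List<String> getNamesUnsafe() {
        return names;
    }

    // Safer: publish a read-only snapshot instead
    public synchronized List<String> getNames() {
        return Collections.unmodifiableList(new ArrayList<>(names));
    }
}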

The most useful policies for using and sharing objects in a concurrent program are: thread confinement, immutability (shared read-only), using built-in thread-safe objects, and guarding shared objects with a specific lock.

Thread Confinement and ThreadLocal

As mentioned before, if an object is not shared by multiple threads and is accessed by only one thread, thread safety is guaranteed automatically. This technique is called thread confinement. Thread confinement is used extensively in frameworks such as Swing and JDBC.

Thread confinement can be implemented in various ways, two of which are very popular: stack confinement and ThreadLocal. In the first method, objects are copied or cloned locally (so the reference lives on the thread's stack) before they are accessed or modified by a thread. The more formal, built-in confinement method is ThreadLocal, which holds a per-thread copy of a value; it is essentially a per-thread container providing initialValue, set, and get methods.
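
As a minimal sketch of ThreadLocal-based confinement (the formatter example is an assumption, not taken from the book's text), each thread gets its own instance of a non-thread-safe object:

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatter {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);  // uses this thread's own copy
    }
}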

Immutability

Immutable objects can be used safely by any thread without additional synchronization, even when synchronization is not used to publish them. An object is immutable if both of the following hold (a short sketch follows the list):
  • All its fields are final
  • Its state cannot be changed after construction
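
A minimal sketch of an immutable class (the Point example is illustrative): all fields are final, there are no mutators, and any "modification" returns a new instance:

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Modifying" the point produces a new object instead of changing this one
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}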

.... To be Continued