Sunday, September 4, 2022

TOMORROW'S CLASS IS ABOUT MULTITHREADING IN JAVA - WAITING FOR THE EXPLANATION

An example: Thread.currentThread().getName() returns the name of the current thread, and printing Thread.currentThread() itself shows the thread name, priority and thread group name, e.g. Thread[main,5,main].
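A minimal sketch of inspecting the current thread from main (the output shown in the comments is typical; the exact toString() format varies between JDK versions):

public class CurrentThreadInfo {
    public static void main(String[] args) {
        Thread t = Thread.currentThread();
        System.out.println(t.getName());                  // main
        System.out.println(t.getPriority());              // usually 5 (Thread.NORM_PRIORITY)
        System.out.println(t.getThreadGroup().getName()); // main
        // toString() combines the same fields; older JDKs print e.g. Thread[main,5,main]
        System.out.println(t);
    }
}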

ThreadGroup in Java

Java provides a convenient way to group multiple threads in a single object. This makes it possible to suspend, resume or interrupt a whole group of threads with a single method call.

Note: the suspend(), resume() and stop() methods are now deprecated.

Java thread group is implemented by java.lang.ThreadGroup class.

A ThreadGroup represents a set of threads. A thread group can also include other thread groups. The thread groups form a tree in which every thread group except the initial thread group has a parent.

1) ThreadGroup(String name) creates a thread group with the given name.
2) ThreadGroup(ThreadGroup parent, String name) creates a thread group with the given parent group and name.

A thread is allowed to access information about its own thread group, but it cannot access information about its parent thread group or any other thread group.

  ThreadGroup tg1 = new ThreadGroup("Group A");
  Thread t1 = new Thread(tg1, new MyRunnable(), "one");
  Thread t2 = new Thread(tg1, new MyRunnable(), "two");
  Thread t3 = new Thread(tg1, new MyRunnable(), "three");

Now all three threads belong to one group. Here, tg1 is the thread group ("Group A" is its name), MyRunnable is a class that implements the Runnable interface, and "one", "two" and "three" are the thread names.

Now we can interrupt all the threads in the group with a single line of code:

  Thread.currentThread().getThreadGroup().interrupt();
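Putting the pieces together, here is a minimal, self-contained sketch; the body of MyRunnable shown below is just an illustrative placeholder (it sleeps until interrupted):

public class ThreadGroupDemo {

    // Simple Runnable used for illustration; it sleeps until interrupted.
    static class MyRunnable implements Runnable {
        public void run() {
            try {
                Thread.sleep(5000);
                System.out.println(Thread.currentThread().getName() + " finished normally");
            } catch (InterruptedException e) {
                System.out.println(Thread.currentThread().getName() + " was interrupted");
            }
        }
    }

    public static void main(String[] args) {
        ThreadGroup tg1 = new ThreadGroup("Group A");

        Thread t1 = new Thread(tg1, new MyRunnable(), "one");
        Thread t2 = new Thread(tg1, new MyRunnable(), "two");
        Thread t3 = new Thread(tg1, new MyRunnable(), "three");
        t1.start();
        t2.start();
        t3.start();

        // Interrupt every thread in the group with one call.
        tg1.interrupt();
    }
}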


The reality of multi-core hardware has made concurrent programs pervasive. Unfortunately, writing correct concurrent programs is difficult. Atomicity violation, caused by concurrent executions unexpectedly violating the atomicity of a certain code region, is one of the most common kinds of concurrency bugs.
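As an illustration (a hypothetical example, not from the text above), the classic read-modify-write race on a shared counter is an atomicity violation: counter++ is really three steps (read, add, write), so two threads can interleave and lose updates:

public class AtomicityViolationDemo {
    static int counter = 0;  // shared data, updated without synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;  // read, add 1, write back: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but interleaved updates are often lost,
        // so the printed value is usually smaller.
        System.out.println("counter = " + counter);
    }
}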



HTM/STM.

Transactional memory, which originated in database theory, provides an alternative strategy for process synchronization.

A memory transaction is an atomic sequence of memory read-write operations. If all operations in a transaction complete, the transaction is committed; otherwise, the operations must be aborted and rolled back. The benefits of transactional memory can be obtained through features added to a programming language. Consider an example. Suppose we have a function update() that modifies shared data. Traditionally, this function would be written using mutex locks (or semaphores), such as the following:

void update() {
   acquire();
   /* modify shared data */
   release();
}

However, using synchronization mechanisms such as mutex locks and semaphores involves many potential problems, including deadlock. Additionally, as the number of threads increases, traditional locking scales less well, because the level of contention among threads for lock ownership becomes very high. As an alternative to traditional locking methods, new features that take advantage of transactional memory can be added to a programming language. In our example, suppose we add the construct atomic{S}, which ensures that the operations in S execute as a transaction. This allows us to rewrite the update() function as follows:

void update() {
   atomic {
      /* modify shared data */
   }
}

The advantage of using such a mechanism rather than locks is that the transactional memory system, not the developer, is responsible for guaranteeing atomicity. Additionally, because no locks are involved, deadlock is not possible. Furthermore, a transactional memory system can identify which statements in atomic blocks can be executed concurrently, such as concurrent read access to a shared variable. It is, of course, possible for a programmer to identify these situations and use reader-writer locks, but the task becomes increasingly difficult as the number of threads within an application grows.

Transactional memory can be implemented in either software or hardware. Software transactional memory (STM), as its name suggests, implements transactional memory exclusively in software; no special hardware is needed. STM works by inserting instrumentation code inside transaction blocks. The code is inserted by a compiler and manages each transaction by examining where statements may run concurrently and where specific low-level locking is required. Hardware transactional memory (HTM) uses hardware cache hierarchies and cache coherency protocols to manage and resolve conflicts involving shared data residing in separate processors' caches. HTM requires no special code instrumentation and thus has less overhead than STM. However, it does require that existing cache hierarchies and cache coherency protocols be modified to support transactional memory.

Transactional memory has existed for several years without widespread implementation. However, the growth of multicore systems and the associated emphasis on concurrent and parallel programming have prompted a significant amount of research in this area by both academics and commercial software and hardware vendors.
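Java has no built-in atomic{} construct or standard transactional memory API, so the sketch below is only a rough emulation of the shape of such an API using a single global ReentrantLock. It gives the same programming convenience (the caller never touches a lock directly) but none of the real benefits of STM/HTM such as optimistic concurrency or automatic rollback; the atomic() helper and AtomicBlockSketch class are hypothetical names used only for this illustration.

import java.util.concurrent.locks.ReentrantLock;

public class AtomicBlockSketch {
    // One global lock stands in for the transactional machinery.
    private static final ReentrantLock GLOBAL_LOCK = new ReentrantLock();

    // Hypothetical helper: runs the given block "atomically" with respect to
    // every other block submitted through this method.
    static void atomic(Runnable block) {
        GLOBAL_LOCK.lock();
        try {
            block.run();  // a real STM would track reads/writes and roll back on conflict
        } finally {
            GLOBAL_LOCK.unlock();
        }
    }

    static int sharedData = 0;

    // The update() function from the text, written against the atomic() helper:
    static void update() {
        atomic(() -> {
            /* modify shared data */
            sharedData++;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) update(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) update(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("sharedData = " + sharedData);  // 200000 every run
    }
}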
