Friday 16 September 2016

Java 5 - ScheduledExecutorService - a new way of writing a Java scheduler


Java's Timer/TimerTask was traditionally used for scheduling tasks that need to be executed at a certain interval or frequency. Java 5 introduced the concept of a thread pool to limit the number of threads executing at any point in time. Creating a thread consumes a significant amount of memory, so limiting the number of threads via a thread pool gives better performance and saves memory.

The Java 5 documentation recommends using ScheduledExecutorService for creating scheduled jobs. It is simpler to use and doesn't require subclassing, as TimerTask does.
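For contrast, here is a minimal sketch of the older Timer/TimerTask approach referred to above; the class name and message text are purely illustrative. Note that the task must subclass TimerTask, and all tasks scheduled on a Timer share its single background thread.

package com.prasune.test.concurrent;

import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;

public class OldTimerScheduler {

    public static void main(String[] args) {
        Timer timer = new Timer();

        // TimerTask must be subclassed; Timer runs all its tasks on one background thread.
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                System.out.println("Executing the timer task " + new Date());
            }
        };

        // First run after 5 seconds, then every 60 seconds (arguments are in milliseconds).
        timer.scheduleAtFixedRate(task, 5 * 1000, 60 * 1000);
    }
}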

An example scheduler using ScheduledExecutorService is as follows:

package com.prasune.test.concurrent;

import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SchedulerService {
   
    public static void main(String[] args) {
       
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
       
        Runnable command = new Runnable() {           
            @Override
            public void run() {
                System.out.println("Executing the scheduler " + new Date());
            }
        };
       
        // Creates and executes a periodic action that becomes enabled first
        // after the given initial delay, and subsequently with the given period
        final ScheduledFuture<?> timerHandle =
                scheduler.scheduleAtFixedRate(command, 5, 60, TimeUnit.SECONDS);
       
        // Stop command execution after one hour
        Runnable cancelTimerCommand = new Runnable() {
            public void run() {
                timerHandle.cancel(true);
                }
        };
       
        scheduler.schedule(cancelTimerCommand, 60 * 60, TimeUnit.SECONDS);
    }
}
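One thing the example above does not do is shut the scheduler itself down, so its worker threads keep the JVM alive even after the periodic task is cancelled. Below is a minimal, self-contained sketch of a variant that also shuts the pool down; the class name and pool size are just illustrative.

package com.prasune.test.concurrent;

import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SchedulerShutdownExample {

    public static void main(String[] args) {
        final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        final ScheduledFuture<?> timerHandle = scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                System.out.println("Executing the scheduler " + new Date());
            }
        }, 5, 60, TimeUnit.SECONDS);

        // After one hour, cancel the periodic task and shut the pool down so that
        // its non-daemon worker threads are released and the JVM can exit.
        scheduler.schedule(new Runnable() {
            @Override
            public void run() {
                timerHandle.cancel(true);
                scheduler.shutdown();
            }
        }, 60 * 60, TimeUnit.SECONDS);
    }
}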



Java 5 - ReadWriteLock for better performance - ReentrantReadWriteLock and StampedLock


ReadWriteLock was introduced in Java 5 in the form of ReentrantReadWriteLock. A ReadWriteLock can improve performance significantly in a multi-threaded environment when a resource is read far more often than it is written.

The concept is simple: multiple threads can read simultaneously without blocking each other, but while a write operation is in progress all subsequent read/write lock requests must wait. A write lock also waits for already acquired read locks to be released before it gains exclusive access to the resource.

A typical use case for a ReadWriteLock is maintaining a cache that is read frequently but updated only when a new value is added to the system.

An example cache managed by ReadWriteLock is as follows:


package com.prasune.test.concurrent;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedDataManager {

    private final Map<String, Object> cachedData = new HashMap<>();

    /**
     * Read/write lock to manage cache read/write operations.
     */
    private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    /**
     * Singleton instance
     */
    private static CachedDataManager instance = new CachedDataManager();

    /**
     * private constructor to make class singleton
     */
    private CachedDataManager() {

    }
   
    public static CachedDataManager getInstance() {
        return instance;
    }

    /**
     * Fetch cache data using key
     * @param key
     * @return
     */
    public Object getCachedData(String key) {
        readWriteLock.readLock().lock();
        try {
            return cachedData.get(key);
        } finally {
            readWriteLock.readLock().unlock();
        }
    }
   
    /**
     * Update the cache by adding a new entry
     * @param key
     * @param data
     */
    public void addToCache(String key, Object data) {
        readWriteLock.writeLock().lock();
        try {
            cachedData.put(key, data);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }
}
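A minimal usage sketch of the cache above; the threads, key, and value used here are purely illustrative:

package com.prasune.test.concurrent;

public class CachedDataManagerDemo {

    public static void main(String[] args) throws InterruptedException {
        final CachedDataManager cache = CachedDataManager.getInstance();

        // A writer thread adds an entry under the write lock...
        Thread writer = new Thread(new Runnable() {
            @Override
            public void run() {
                cache.addToCache("user.42", "Alice");
            }
        });

        // ...and a reader thread fetches it under the read lock.
        Thread reader = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("Cached value: " + cache.getCachedData("user.42"));
            }
        });

        writer.start();
        writer.join();
        reader.start();
        reader.join();
    }
}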




ReentrantReadWriteLock performs better than a plain exclusive lock for read-heavy workloads, but under heavy contention it can still become a bottleneck. Java 8 therefore introduced StampedLock, an alternative read/write lock designed to scale better and to support optimistic reads.

StampedLock returns a stamp (a long value) from every lock acquisition; the stamp is later used to release the lock or to validate an optimistic read.

Let us modify our program to use StampedLock:


package com.prasune.test.concurrent;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.StampedLock;

public class CachedDataManager {

    private final Map<String, Object> cachedData = new HashMap<>();

    /**
     * Read/write lock to manage cache read/write operations.
     * Note that StampedLock, unlike ReentrantReadWriteLock, is not reentrant.
     */
    private final StampedLock readWriteLock = new StampedLock();

    /**
     * Singleton instance
     */
    private static CachedDataManager instance = new CachedDataManager();

    /**
     * private constructor to make class singleton
     */
    private CachedDataManager() {

    }
   
    public static CachedDataManager getInstance() {
        return instance;
    }

    /**
     * Fetch cache data using key
     * @param key
     * @return
     */
    public Object getCachedData(String key) {
        long stamp = readWriteLock.readLock();
        try {
            return cachedData.get(key);
        } finally {
            readWriteLock.unlockRead(stamp);
        }
    }
   
    /**
     * Update the cache by adding a new entry
     * @param key
     * @param data
     */
    public void addToCache(String key, Object data) {
        long stamp = readWriteLock.writeLock();
        try {
            cachedData.put(key, data);
        } finally {
            readWriteLock.unlockWrite(stamp);
        }
    }
}
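StampedLock also supports an optimistic read mode that avoids acquiring the read lock at all while writes are rare. Below is a sketch of how the getCachedData method above could use it; keep in mind that an optimistic read may observe the map mid-update, so this pattern is only appropriate when the read code tolerates (and then discards) potentially inconsistent data, which a plain HashMap lookup does not strictly guarantee.

    public Object getCachedData(String key) {
        // Optimistic read: no lock is actually held, so writers are never blocked.
        long stamp = readWriteLock.tryOptimisticRead();
        Object data = cachedData.get(key);
        if (!readWriteLock.validate(stamp)) {
            // A write happened while we were reading; retry under a real read lock.
            stamp = readWriteLock.readLock();
            try {
                data = cachedData.get(key);
            } finally {
                readWriteLock.unlockRead(stamp);
            }
        }
        return data;
    }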





Java 8 - Producer/Consumer threads using executor framework


Creating a thread consumes a significant amount of memory. In an application with many clients, creating a thread per client will not scale. Java 5 therefore introduced the executor framework, which provides thread pools that limit the number of threads serving client requests at any point in time. This improves performance and reduces the memory footprint.

Java 5 also provides blocking queue implementations, so we no longer need to coordinate producers and consumers with wait/notify; the BlockingQueue implementations take care of that automatically.

An example producer/consumer making use of a blocking queue implementation and the executor framework is as follows:


package com.prasune.coding.thread;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TestProducerConsumer {

    private static final int NUM_OF_MSGS = 20;
    private static final BlockingQueue<String> queue 
                                              = new ArrayBlockingQueue<String>(5);
    private static ExecutorService producerPool = Executors.newFixedThreadPool(3);
    private static ExecutorService consumerPool = Executors.newFixedThreadPool(1);

    private static final Logger logger =
            Logger.getLogger(TestProducerConsumer.class.getName());

    public static void main(String[] args) {
        Runnable producerTask = () -> {
            try {
                queue.put("test Message");
                System.out.println(Thread.currentThread().getName() 
                                   + " put message queue.size() " + queue.size());
            } catch (InterruptedException e) {
                logger.log(Level.SEVERE, e.getMessage(), e);
            }
        };
        Runnable consumerTask = () -> {
            try {
                System.out.println(Thread.currentThread().getName() 
                                   + " received msg " + queue.take());

            } catch (InterruptedException e) {
                logger.log(Level.SEVERE, e.getMessage(), e);
            }
        };
        try {
            for (int i = 0; i < NUM_OF_MSGS; i++) {
                producerPool.submit(producerTask);
            }
            for (int i = 0; i < NUM_OF_MSGS; i++) {
                consumerPool.submit(consumerTask);
            }
        } finally {
            if (producerPool != null) {
                producerPool.shutdown();
            }
            if (consumerPool != null) {
                consumerPool.shutdown();
            }
        }
    }
}
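The main method above calls shutdown() but never waits for the pools to drain. Below is a small sketch of an orderly wait that could be added to the class (the helper name and the 30-second timeout are arbitrary, and java.util.concurrent.TimeUnit would need to be imported):

    // Wait briefly for submitted tasks to finish; force a shutdown if they don't.
    private static void shutdownAndAwait(ExecutorService pool) {
        pool.shutdown();
        try {
            if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                pool.shutdownNow();
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }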



Thursday 15 September 2016

How to detect locks in SQL sessions

To find the sessions currently waiting on a lock (from v$session):
SQL> select sid, serial#, username, command, lockwait, osuser from v$session where lockwait is not null;

To kill a locked session, first find its sid and serial#, then use:
SQL> alter system kill session 'sid, serial#';
*** you need DBA privileges (specifically ALTER SYSTEM) to kill sessions

To find which SQL statement is waiting on a lock:
SQL> select sql_text from v$sqltext where (address,hash_value) in (select sql_address,sql_hash_value from v$session where lockwait is not null) order by address, hash_value, piece;

If the SQL above is parameterized, use V$SQL_BIND_CAPTURE to display information on the bind variables used by the SQL cursors. Each row in the view contains information for one bind variable defined in a cursor.
SQL> select * from V$SQL_BIND_CAPTURE where (address,hash_value) in (select sql_address,sql_hash_value from v$session where lockwait is not null) order by address, hash_value;

SQL to check which objects are currently locked and which sessions hold the locks:
SQL> select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
     from v$locked_object a, v$session b, dba_objects c
     where b.sid = a.session_id and a.object_id = c.object_id;

Oracle - SQL diagnostic reports

AWR (Automatic Workload Repository) report: Oracle collects, processes, and maintains performance statistics via periodic snapshots; these statistics can be accessed through AWR reports.

Generating AWR report:
SQL>@$ORACLE_HOME/rdbms/admin/awrrpt.sql

Also see these related AWR report scripts in the same location:
awrrpt.sql
Displays various statistics for a range of snapshots Ids.
awrrpti.sql
Displays statistics for a range of snapshot Ids on a specified database and instance.
awrsqrpt.sql
Displays statistics of a particular SQL statement for a range of snapshot Ids. Run this report to inspect or debug the performance of a particular SQL statement.
awrsqrpi.sql
Displays statistics of a particular SQL statement for a range of snapshot Ids on a specified database and instance.
awrddrpt.sql
Compares detailed performance attributes and configuration settings between two selected time periods.
awrddrpi.sql
Compares detailed performance attributes and configuration settings between two selected time periods on a specific database and instance.

ASH (Active Session History) report: displays top session activities during AWR snapshots.

Generating ASH report:
SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql
SQL> @$ORACLE_HOME/rdbms/admin/ashrpti.sql

ADDM (Automatic Database Diagnostic Monitor) report: shows the most significant performance issues between AWR snapshots.

ADDM reports include:
Top SQL Activities
CPU bottlenecks
Undersized memory allocations
Excessive parsing
I/O usage
Concurrency issues
Object contention

Generating ADDM report:
SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql


Generating SQL trace

Turn on SQL tracing (event 10046 at level 12 captures both bind values and wait events):

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

Turn off SQL tracing:

ALTER SESSION SET EVENTS '10046 trace name context off';


Location of trace dump file:

Trace output is written to the database's UDUMP directory.

UDUMP is the database's user dump directory; you can find it using:
SQL> SHOW PARAMETERS user_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest                       string      /xx/xx/xx/xx/udump

To change this value:
SQL> ALTER SYSTEM SET user_dump_dest = '/xx/xx/xx/xx/udump' SCOPE=both;

System altered.


The default name for a trace file is INSTANCE_ora_PID_TRACEID.trc, where:
INSTANCE is the name of the Oracle instance,
PID is the operating system process ID (select SPID from V$PROCESS); and
TRACEID is an optional character string of your choosing, set with ALTER SESSION SET TRACEFILE_IDENTIFIER = 'some_id'.

To find the SPID of your current session:
SQL> select vp.spid
     from v$session vs, v$process vp
     where vs.sid in (select distinct sid from v$mystat)
       and vs.paddr = vp.addr;