No Managed Connections Available: this exception is thrown when a client (typically a DAO) asks for a database connection from a pool (managed by a connection manager) and the pool has no more connections left to hand out. It can happen with any connection pool, e.g. a pool created using Apache DBCP or a pool configured through a JBoss data source. Normally a wait (blocking) timeout can be defined for the pool in milliseconds: the amount of time the pool manager will wait for a connection to become available before it throws the exception. In the example stack trace, the value was configured as 5000 ms, i.e. 5 seconds.
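As a concrete illustration, here is a minimal sketch of a JBoss *-ds.xml data source definition showing where that blocking timeout lives. The JNDI name, URL, driver, and pool sizes are placeholders, not taken from the original article:

```xml
<!-- Illustrative JBoss *-ds.xml fragment; names and sizes are hypothetical -->
<datasources>
  <local-tx-datasource>
    <jndi-name>MyAppDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
    <!-- how long a caller blocks waiting for a free connection:
         5000 ms = 5 s, matching the value in the stack trace below -->
    <blocking-timeout-millis>5000</blocking-timeout-millis>
  </local-tx-datasource>
</datasources>
```

When the pool has been at max-pool-size for longer than blocking-timeout-millis, the waiting caller gets the "No ManagedConnections available" ResourceException shown in the trace.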

Exception Stack Trace

exception.DAOException: org.jboss.util.NestedSQLException: No ManagedConnections available within configured blocking timeout ( 5000 [ms] ); - nested th
Throwable: (javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 5000 [ms] ))
        at sun.reflect.GeneratedMethodAccessor103.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:292)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:155)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:122)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:144)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:174)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
        at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
        at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:175)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:432)
        at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:74)

Solution

This exception can have several causes: it may indicate a performance problem, or it may mean the environment needs to scale to handle more load. Different steps need to be taken depending on which cause applies.

Understand the Behavior First of all, track how often the exception occurs to understand the behavior. Is it time based? Does it happen when certain transactions are executed? What are the current settings for the pool: max and min size, the configured wait timeout, and so on? Review all of these and try to build a pattern of when the exception happens. If possible, try to reproduce the problem in a development environment and review it more closely there, since production environments are normally restrictive.

Load Tuning Review the pool-size settings and check whether the number of connections is adequate to handle the incoming load. Maybe the load (traffic/requests) has gone up after a product release or due to more customers, depending on the business use case. See whether increasing the max pool size helps reduce such occurrences, or even eliminates them. Also consider whether it is acceptable for requests to wait longer, by increasing the wait timeout. Keep in mind that simply increasing the pool size may also increase database resource utilization, so that has to be reviewed as well. Finally, review the deployment architecture; it may be time to add one or more servers to the farm to scale out and handle more traffic.
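For comparison, the same knobs exist in an Apache DBCP pool. A hypothetical Tomcat context.xml Resource using DBCP 1.x might look like this (all names, credentials, and sizes are illustrative, not from the original setup):

```xml
<!-- Hypothetical DBCP 1.x pool declared as a Tomcat JNDI resource:
     maxActive caps the pool size, maxWait is the blocking timeout in ms -->
<Resource name="jdbc/MyAppDS" auth="Container" type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.driver.OracleDriver"
          url="jdbc:oracle:thin:@dbhost:1521:ORCL"
          username="app" password="secret"
          maxActive="40"
          maxIdle="10"
          maxWait="10000" />
```

Raising maxActive trades database-side resources for fewer pool-exhaustion errors; raising maxWait trades request latency for fewer errors. Both should be changed deliberately and measured, not just bumped.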

Performance Tuning If the above does not help, or is not an option, then start looking at performance tuning. It may be that your SQL requests are taking a long time to execute because of queries that need tuning from a database perspective, so talk to your DBA to review your queries and tune them accordingly. To help identify slow SQL, you can use a simple StopWatch to measure the time taken by long-running queries, or use a P6Spy driver to capture the SQL statements being executed along with their timings for review. Sometimes, due to bad code, a connection object is held longer than it is actually needed, or is not returned to the pool at all, causing pool starvation until the pool eventually runs out of connections. Review the code to see if such leaks are happening and fix them.
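The two code-side fixes above can be sketched in a few lines of Java. This is a minimal, hypothetical helper (a stand-in for something like Spring's StopWatch, not the article's actual code): one method times an arbitrary action so slow queries stand out, and the other shows the try/finally pattern that guarantees a borrowed connection always goes back to the pool.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class PoolHygiene {

    /** Times an arbitrary action (e.g. a DAO query) and returns elapsed milliseconds. */
    public static long timeMillis(Runnable action) {
        long start = System.nanoTime();
        action.run();
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    /**
     * Borrow a connection, use it, and ALWAYS return it in finally --
     * a missing close() here is exactly the leak that starves the pool.
     */
    public static void queryWithProperRelease(DataSource ds) throws SQLException {
        Connection conn = ds.getConnection();   // borrowed from the pool
        try {
            // ... execute statements with conn ...
        } finally {
            conn.close();                       // returns the connection to the pool
        }
    }

    public static void main(String[] args) {
        // Simulate a "query" with a sleep and report how long it took.
        long elapsed = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { /* ignore */ }
        });
        System.out.println("simulated query took ~" + elapsed + " ms");
    }
}
```

With Java 7+, a try-with-resources block (`try (Connection conn = ds.getConnection()) { ... }`) achieves the same guaranteed release with less code.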