
Reproducing a Druid connection-leak bug in a few lines of code: keepAlive


Background

An earlier post, 一次druid数据库连接池连接泄露的排查分析, walked through the analysis and troubleshooting of a connection leak in the Druid connection pool. 几行代码轻松复现druid连接泄露的BUG之PhyTimeout reproduced the leak that occurs when phyTimeoutMillis is configured, and 几行代码轻松复现druid连接泄露的BUG之onFatalError reproduced the leak triggered by certain database exceptions. This post reproduces, in code, the connection leak that occurs when the keepAlive option is enabled.

Reproduction

The leak scenario

This section simulates the connection leak that occurs when the keepAlive option is enabled.

Leak reproduction code

import com.alibaba.druid.pool.DruidConnectionHolder;
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.pool.DruidPooledConnection;

import java.lang.reflect.Field;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DruidAbandonedCase4Keepalive {
    public static void main(String[] args) throws Exception {
        Map<Long,DruidConnectionHolder> holderMap = new HashMap<>();

        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        dataSource.setUsername("root");
        dataSource.setPassword("123456");
        dataSource.setUrl("jdbc:mysql://127.0.0.1:3306/test?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8&useSSL=false");

        dataSource.setMinIdle(2);
        dataSource.setKeepAlive(true);
        dataSource.setTimeBetweenEvictionRunsMillis(500);
        long minEvictableIdleTimeMillis = 500L;
        dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
        long keepAliveBetweenTimeMillis = 1000L;
        dataSource.setKeepAliveBetweenTimeMillis(keepAliveBetweenTimeMillis);
        dataSource.init();

        // interrupt the Druid-ConnectionPool-Destroy-xx thread so that shrink() is only invoked manually below
        try{
            Field destroyConnectionThreadField = DruidDataSource.class.getDeclaredField("destroyConnectionThread");
            destroyConnectionThreadField.setAccessible(true);
            Thread destroyConnectionThread = (Thread)destroyConnectionThreadField.get(dataSource);
            destroyConnectionThread.interrupt();
            Thread.State state = destroyConnectionThread.getState();
            System.out.println("destroyConnectionThread state : " + state);
        }catch (Exception e){
            e.printStackTrace();
        }

        DruidPooledConnection connection1 = dataSource.getConnection();
        DruidConnectionHolder holder1 = connection1.getConnectionHolder();
        print(holder1);
        holderMap.put(holder1.getConnectionId(),holder1);

        DruidPooledConnection connection2 = dataSource.getConnection();
        DruidConnectionHolder holder2 = connection2.getConnectionHolder();
        print(holder2);
        holderMap.put(holder2.getConnectionId(),holder2);
        connection2.close();

        // let connection2 (10002) accumulate idle time while connection1 (10001) is still checked out
        sleep(keepAliveBetweenTimeMillis - minEvictableIdleTimeMillis + 100L);

        connection1.close();
        // by now 10002 has been idle longer than keepAliveBetweenTimeMillis,
        // while 10001 has been idle less than minEvictableIdleTimeMillis
        sleep(minEvictableIdleTimeMillis - 100L);

        pooling(dataSource,holderMap);
        shrink(dataSource);
        pooling(dataSource,holderMap);

        sleep(100L);
        shrink(dataSource);
        pooling(dataSource,holderMap);

        System.out.println();
        print(holder1);
        print(holder2);
    }

    private static void sleep(long millis){
        try{
            Thread.sleep(millis);
        }catch (InterruptedException e){
            e.printStackTrace();
        }
    }

    private static void shrink(DruidDataSource dataSource){
        System.out.println();
        System.out.println("调用shrink");
        // 手动执行shrink
        dataSource.shrink(true);
    }
    private static void pooling(DruidDataSource dataSource,Map<Long,DruidConnectionHolder> holderMap) throws SQLException {
        System.out.println();
        System.out.println("连接池中连接情况,begin");
        List<Map<String, Object>> conns = dataSource.getPoolingConnectionInfo();
        for (Map<String, Object> conn : conns) {
            print(holderMap.get(conn.get("connectionId")));
        }
        System.out.println("连接池中连接情况,end");
    }
    private static void print(DruidConnectionHolder holder) throws SQLException {
        System.out.println(holder.getConnectionId()
                + " idleMillis : " + (System.currentTimeMillis() - holder.getLastActiveTimeMillis())
                + " isClosed:" + holder.getConnection().isClosed());
    }
}

Running the program above produces the following output (druid 1.2.8):

destroyConnectionThread state : TIMED_WAITING
10001 idleMillis : 6 isClosed:false
10002 idleMillis : 1 isClosed:false

connections in pool, begin
10002 idleMillis : 1015 isClosed:false
10001 idleMillis : 404 isClosed:false
connections in pool, end

invoking shrink

connections in pool, begin
10001 idleMillis : 405 isClosed:false
10002 idleMillis : 1016 isClosed:false
connections in pool, end

invoking shrink

connections in pool, begin
10001 idleMillis : 405 isClosed:false
10002 idleMillis : 1016 isClosed:false
connections in pool, end

invoking shrink

connections in pool, begin
10002 idleMillis : 1133 isClosed:true
connections in pool, end

10001 idleMillis : 522 isClosed:false
10002 idleMillis : 1133 isClosed:true

What the code does, step by step:

  1. To make testing easier, interrupt the Druid-ConnectionPool-Destroy-xx thread so that shrink can be invoked manually;
  2. Get the connection with connectionId 10001 and print its info;
  3. Get the connection with connectionId 10002 and print its info;
  4. Call close() to return connection 10002 to the pool;
  5. Sleep for (keepAliveBetweenTimeMillis - minEvictableIdleTimeMillis + 100L) ms;
  6. Call close() to return connection 10001 to the pool;
  7. Sleep for (minEvictableIdleTimeMillis - 100L) ms (the two sleeps ensure that 10002's idle time exceeds keepAliveBetweenTimeMillis while 10001's stays below minEvictableIdleTimeMillis; see the worked timing after this list);
  8. Print the connections currently in the pool;
  9. Invoke shrink;
  10. Print the connections currently in the pool;
  11. Sleep 100 ms;
  12. Invoke shrink;
  13. Print the connections currently in the pool;
  14. Print the info of connections 10001 and 10002.
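
Worked timing with the values above (approximate, ignoring call overhead): the first sleep is 1000 - 500 + 100 = 600 ms, so by the time connection 10001 is returned, 10002 has already been idle for about 600 ms. The second sleep is 500 - 100 = 400 ms, so when the pool is first inspected 10002 has been idle for roughly 1000 ms or more (>= keepAliveBetweenTimeMillis = 1000 ms) while 10001 has been idle for only about 400 ms (< minEvictableIdleTimeMillis = 500 ms), which matches the 1015 ms and 404 ms idle times in the output.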

Expected result:

  • Connections 10001 and 10002 should both still be managed by the pool, and both should be in the open (not closed) state.

Actual result:

  • Connection 10001 has been removed from the pool, yet its physical connection is still open: a leaked connection [does not match the expectation]; a plausible mechanism is sketched below.
  • Connection 10002 is still in the pool, but its physical connection is closed [does not match the expectation].
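
A plausible explanation for the leaked 10001, shown here as a simplified, self-contained sketch rather than Druid's actual shrink() source: the shrink pass can select a keep-alive candidate (10002) that does not sit at the head of the pool array, yet the array is then compacted from a fixed head offset of removeCount slots, so an entry that was selected for nothing at all (10001) is overwritten and silently drops out of the pool without ever being closed. The class name and the String[] stand-in for the pool array are purely illustrative.

import java.util.Arrays;

// Simplified sketch of a head-offset compaction pitfall; this is NOT Druid's
// actual shrink() code, only an illustration of the suspected mechanism.
public class ShrinkCompactionSketch {
    public static void main(String[] args) {
        // Pool state before the final shrink in the reproduction above:
        // index 0 -> 10001 (idle < minEvictableIdleTimeMillis: should stay in the pool)
        // index 1 -> 10002 (idle >= keepAliveBetweenTimeMillis: keep-alive candidate)
        String[] connections = {"10001", "10002"};
        int poolingCount = connections.length;

        // Only 10002 is selected for removal/keep-alive, so removeCount = 1.
        int removeCount = 1;

        // Compacting from a fixed head offset assumes the selected entries occupy
        // exactly the first removeCount slots, which is not true here.
        System.arraycopy(connections, removeCount, connections, 0, poolingCount - removeCount);
        Arrays.fill(connections, poolingCount - removeCount, poolingCount, null);
        poolingCount -= removeCount;

        // 10001 has been overwritten: it is no longer in the pool, was never closed,
        // and was never handed to the evict/keep-alive lists -> a leaked connection.
        System.out.println(Arrays.toString(Arrays.copyOf(connections, poolingCount)));
        // prints: [10002]
    }
}

Under this reading, 10002 is then handed off to the keep-alive check, and in the affected versions the close/put-back handling around that check can leave a physically closed connection sitting in the pool, which would account for the isClosed:true entry in the final output.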

Based on testing, the keepAlive option was introduced in druid 1.1.16. Versions 1.1.16-1.1.24 and 1.2.0-1.2.17 all exhibit this connection leak; versions 1.2.18-1.2.20 do not.
