In this article we will discuss how to use Apache Spark to improve the performance of MySQL queries.

In my previous article, Apache Spark with MySQL, I showed how to use Apache Spark for data analysis and for transforming and analyzing large volumes of data stored in text files. Vadim also ran a benchmark comparing MySQL against Spark with the Parquet columnar format (using the on-time airline performance data). That test was great, but what if we do not want to move the data out of MySQL into another storage system and would rather keep running queries on the existing MySQL server? Apache Spark can help with that too!

Using Apache Spark on top of an existing MySQL server (without exporting the data to Spark or Hadoop) can increase query performance at least 10 times. Using multiple MySQL servers (replication or Percona XtraDB Cluster) gives an additional performance boost for some queries. You can also use Spark's caching feature to cache the entire result set of a MySQL query.

The idea is simple: Spark can read MySQL data via JDBC and can also execute SQL queries, so we can connect directly to MySQL and run the queries there. Why is this faster? For long-running queries (such as reporting or BI), it can be much faster because Spark is a massively parallel system. MySQL can only use one CPU core per query, whereas Spark can use all the cores on all the nodes of a cluster. In the examples below, MySQL queries executed through Spark run 5 to 10 times faster than when run against MySQL directly.

In addition, Spark can add "cluster-level" parallelism. With MySQL replication or Percona XtraDB Cluster, Spark can split a query into a set of smaller queries (similar to having one query per partition of a partitioned table) and then run those smaller queries in parallel across multiple slaves or multiple Percona XtraDB Cluster nodes. Finally, it aggregates the partial results from each node into the complete result set using a map/reduce approach.
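To illustrate the idea, here is a minimal Python sketch (purely illustrative, not Spark's actual code; the `split_query` helper is hypothetical) of how a query over a numeric "partition column" can be split into independent range queries that could run in parallel on one or more MySQL servers:

```python
# A simplified illustration of splitting one query into per-range queries
# on a numeric partition column (similar in spirit to Spark's JDBC source).

def split_query(base_where, column, lower, upper, num_partitions):
    """Return one WHERE clause per partition, together covering [lower, upper]."""
    stride = (upper - lower) / num_partitions
    clauses = []
    for i in range(num_partitions):
        lo = lower + round(i * stride)
        hi = lower + round((i + 1) * stride)
        if i == 0:
            pred = f"{column} < {hi} OR {column} IS NULL"   # first range also picks up NULLs
        elif i == num_partitions - 1:
            pred = f"{column} >= {lo}"                       # last range is open-ended
        else:
            pred = f"{column} >= {lo} AND {column} < {hi}"
        clauses.append(f"{base_where} AND ({pred})")
    return clauses

queries = split_query("DayOfWeek NOT IN (6, 7)", "yearD", 1988, 2016, 28)
print(len(queries))  # 28 independent range queries
print(queries[1])    # DayOfWeek NOT IN (6, 7) AND (yearD >= 1989 AND yearD < 1990)
```

Each of these range queries can be sent to a different server, and the partial results are then merged.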

This article uses the same database as my earlier "Airlines On-Time Performance" post. Vadim created scripts that make it easy to download the data and load it into MySQL; they can be downloaded here. We are using Apache Spark 2.0, released on July 26, 2016.

Installing Apache Spark

Starting Apache Spark in standalone mode is easy; just follow these steps:

  • Download Apache Spark 2.0 and extract it to a directory.
  • Start the master.
  • Start the slave (worker) and connect it to the master.
  • Start an application (spark-shell or spark-sql).
    root@thor:~/spark# ./sbin/start-master.sh
    less ../logs/spark-root-org.apache.spark.deploy.master.Master-1-thor.out
    15/08/25 11:21:21 INFO Master: Starting Spark master at spark://thor:7077
    15/08/25 11:21:21 INFO Utils: Successfully started service 'MasterUI' on port 8080.
    15/08/25 11:21:21 INFO MasterWebUI: Started MasterWebUI at http://10.60.23.188:8080
    root@thor:~/spark# ./sbin/start-slave.sh spark://thor:7077

To connect to Spark we can use spark-shell (Scala), pyspark (Python) or spark-sql. Since spark-sql works much like the MySQL command line, it is the easiest option (you can even run show tables). I also wanted to use Scala in interactive mode, so I chose spark-shell. In all the examples below I use the same SQL queries in both MySQL and Spark, so there is not much difference between the two.

To let Spark talk to the MySQL server, we need the driver Connector/J for MySQL. Download the archive, extract it, copy mysql-connector-java-5.1.39-bin.jar into the Spark directory, and then add the class path in conf/spark-defaults.conf:

    spark.driver.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar
    spark.executor.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar

Running MySQL Queries with Apache Spark

For this test I used a physical server with 12 cores (an older Intel(R) Xeon(R) CPU L5639 @ 2.13GHz), 48GB of RAM and SSD disks. I installed MySQL on this machine and started the Spark master and slave nodes there as well.

Now we can run MySQL queries in Spark. First, start the shell from the Spark directory (/usr/local/spark in my case):

    $ ./bin/spark-shell --driver-memory 4G --master spark://server1:7077

Then we connect to the MySQL server and register a temporary view:

    val jdbcDF = spark.read.format("jdbc").options(
      Map("url" ->  "jdbc:mysql://localhost:3306/ontime?user=root&password=",
      "dbtable" -> "ontime.ontime_part",
      "fetchSize" -> "10000",
      "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2016", "numPartitions" -> "28"
      )).load()
    jdbcDF.createOrReplaceTempView("ontime")

With that we have created a "data source" for Spark (in other words, Spark has set up a connection to MySQL). The Spark table "ontime" maps to the ontime.ontime_part table in MySQL, and we can now run SQL queries in Spark; they will be parsed and translated into MySQL queries one by one.

The "partitionColumn" option is very important here: it tells Spark to run multiple queries in parallel, one query per partition.
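Note that Spark will not create more partitions than the integer width of the bound range; this clamping is what produces the "Updated number of partitions" warnings visible in the logs later in this article. A tiny illustration of that rule (inferred from the observed warnings, not taken from Spark's source):

```python
def effective_partitions(num_partitions, lower_bound, upper_bound):
    # Spark reduces the requested partition count when it exceeds
    # (upperBound - lowerBound), since narrower integer ranges would be empty.
    return min(num_partitions, upper_bound - lower_bound)

print(effective_partitions(28, 1988, 2016))  # 28: the requested count fits
print(effective_partitions(48, 1988, 2015))  # 27: reduced, as Spark warns
print(effective_partitions(48, 1988, 2014))  # 26: reduced again
```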

Now we can run the query:

    val sqlDF = sql("select min(year), max(year) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and (origin = 'RDU' or dest = 'RDU') GROUP by carrier HAVING cnt > 100000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10")
    sqlDF.show()

A MySQL Query Example

Let's go back to MySQL for a moment and look at the query example. I have chosen the following query (from my earlier article):

    select min(year), max(year) as max_year, Carrier, count(*) as cnt,
    sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
    round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
    FROM ontime
    WHERE
    DayOfWeek not in (6,7)
    and OriginState not in ('AK', 'HI', 'PR', 'VI')
    and DestState not in ('AK', 'HI', 'PR', 'VI')
    GROUP by carrier HAVING cnt > 100000 and max_year > '1990'
    ORDER by rate DESC, cnt desc
    LIMIT  10

This query finds the number of delayed flights for each carrier. It also computes the delay rate in a sensible way, taking the total number of flights into account (we do not want to compare small airlines with large ones, and older airlines that have since shut down are excluded as well).

The main reason I chose this query is that it is very hard to optimize any further in MySQL: all of those WHERE conditions together still leave about 70% of the rows to be scanned. Here is a quick calculation:

    mysql> select count(*) FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI');
    +-----------+
    | count(*)  |
    +-----------+
    | 108776741 |
    +-----------+
    mysql> select count(*) FROM ontime;
    +-----------+
    | count(*)  |
    +-----------+
    | 152657276 |
    +-----------+
    mysql> select round((108776741/152657276)*100, 2);
    +-------------------------------------+
    | round((108776741/152657276)*100, 2) |
    +-------------------------------------+
    |                               71.26 |
    +-------------------------------------+

Here is the table structure:

    CREATE TABLE `ontime_part` (
      `YearD` int(11) NOT NULL,
      `Quarter` tinyint(4) DEFAULT NULL,
      `MonthD` tinyint(4) DEFAULT NULL,
      `DayofMonth` tinyint(4) DEFAULT NULL,
      `DayOfWeek` tinyint(4) DEFAULT NULL,
      `FlightDate` date DEFAULT NULL,
      `UniqueCarrier` char(7) DEFAULT NULL,
      `AirlineID` int(11) DEFAULT NULL,
      `Carrier` char(2) DEFAULT NULL,
      `TailNum` varchar(50) DEFAULT NULL,
      `id` int(11) NOT NULL AUTO_INCREMENT,
      PRIMARY KEY (`id`,`YearD`),
      KEY `covered` (`DayOfWeek`,`OriginState`,`DestState`,`Carrier`,`YearD`,`ArrDelayMinutes`)
    ) ENGINE=InnoDB AUTO_INCREMENT=162668935 DEFAULT CHARSET=latin1
    /*!50100 PARTITION BY RANGE (YearD)
    (PARTITION p1987 VALUES LESS THAN (1988) ENGINE = InnoDB,
     PARTITION p1988 VALUES LESS THAN (1989) ENGINE = InnoDB,
     PARTITION p1989 VALUES LESS THAN (1990) ENGINE = InnoDB,
     PARTITION p1990 VALUES LESS THAN (1991) ENGINE = InnoDB,
     PARTITION p1991 VALUES LESS THAN (1992) ENGINE = InnoDB,
     PARTITION p1992 VALUES LESS THAN (1993) ENGINE = InnoDB,
     PARTITION p1993 VALUES LESS THAN (1994) ENGINE = InnoDB,
     PARTITION p1994 VALUES LESS THAN (1995) ENGINE = InnoDB,
     PARTITION p1995 VALUES LESS THAN (1996) ENGINE = InnoDB,
     PARTITION p1996 VALUES LESS THAN (1997) ENGINE = InnoDB,
     PARTITION p1997 VALUES LESS THAN (1998) ENGINE = InnoDB,
     PARTITION p1998 VALUES LESS THAN (1999) ENGINE = InnoDB,
     PARTITION p1999 VALUES LESS THAN (2000) ENGINE = InnoDB,
     PARTITION p2000 VALUES LESS THAN (2001) ENGINE = InnoDB,
     PARTITION p2001 VALUES LESS THAN (2002) ENGINE = InnoDB,
     PARTITION p2002 VALUES LESS THAN (2003) ENGINE = InnoDB,
     PARTITION p2003 VALUES LESS THAN (2004) ENGINE = InnoDB,
     PARTITION p2004 VALUES LESS THAN (2005) ENGINE = InnoDB,
     PARTITION p2005 VALUES LESS THAN (2006) ENGINE = InnoDB,
     PARTITION p2006 VALUES LESS THAN (2007) ENGINE = InnoDB,
     PARTITION p2007 VALUES LESS THAN (2008) ENGINE = InnoDB,
     PARTITION p2008 VALUES LESS THAN (2009) ENGINE = InnoDB,
     PARTITION p2009 VALUES LESS THAN (2010) ENGINE = InnoDB,
     PARTITION p2010 VALUES LESS THAN (2011) ENGINE = InnoDB,
     PARTITION p2011 VALUES LESS THAN (2012) ENGINE = InnoDB,
     PARTITION p2012 VALUES LESS THAN (2013) ENGINE = InnoDB,
     PARTITION p2013 VALUES LESS THAN (2014) ENGINE = InnoDB,
     PARTITION p2014 VALUES LESS THAN (2015) ENGINE = InnoDB,
     PARTITION p2015 VALUES LESS THAN (2016) ENGINE = InnoDB,
     PARTITION p_new VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */

Even with a "covered" index, MySQL will still scan about 70M-100M rows and build a temporary table:

    mysql>  explain select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: ontime_part
             type: range
    possible_keys: covered
              key: covered
          key_len: 2
              ref: NULL
             rows: 70483364
            Extra: Using where; Using index; Using temporary; Using filesort
    1 row in set (0.00 sec)

Here is the response time of the MySQL query:

    mysql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10;
    +------------+----------+---------+----------+-----------------+------+
    | min(yearD) | max_year | Carrier | cnt      | flights_delayed | rate |
    +------------+----------+---------+----------+-----------------+------+
    |       2003 |     2013 | EV      |  2962008 |          464264 | 0.16 |
    |       2003 |     2013 | B6      |  1237400 |          187863 | 0.15 |
    |       2006 |     2011 | XE      |  1615266 |          230977 | 0.14 |
    |       2003 |     2005 | DH      |   501056 |           69833 | 0.14 |
    |       2001 |     2013 | MQ      |  4518106 |          605698 | 0.13 |
    |       2003 |     2013 | FL      |  1692887 |          212069 | 0.13 |
    |       2004 |     2010 | OH      |  1307404 |          175258 | 0.13 |
    |       2006 |     2013 | YV      |  1121025 |          143597 | 0.13 |
    |       2003 |     2006 | RU      |  1007248 |          126733 | 0.13 |
    |       1988 |     2013 | UA      | 10717383 |         1327196 | 0.12 |
    +------------+----------+---------+----------+-----------------+------+
    10 rows in set (19 min 16.58 sec)

It took a full 19 minutes to run, which is nothing to be excited about.

SQL in Spark

Now we want to run the same query in Spark and let Spark read the data from MySQL. We create a "data source" and run the query:

    scala> val jdbcDF = spark.read.format("jdbc").options(
         |   Map("url" ->  "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
         |   "dbtable" -> "ontime.ontime_sm",
         |   "fetchSize" -> "10000",
         |   "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2015", "numPartitions" -> "48"
         |   )).load()
    16/08/02 23:24:12 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 27; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2015.
    jdbcDF: org.apache.spark.sql.DataFrame = [id: int, YearD: date ... 19 more fields]
    scala> jdbcDF.createOrReplaceTempView("ontime")
    scala> val sqlDF = sql("select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10")
    sqlDF: org.apache.spark.sql.DataFrame = [min(yearD): date, max_year: date ... 4 more fields]
    scala> sqlDF.show()
    +----------+--------+-------+--------+---------------+----+
    |min(yearD)|max_year|Carrier|     cnt|flights_delayed|rate|
    +----------+--------+-------+--------+---------------+----+
    |      2003|    2013|     EV| 2962008|         464264|0.16|
    |      2003|    2013|     B6| 1237400|         187863|0.15|
    |      2006|    2011|     XE| 1615266|         230977|0.14|
    |      2003|    2005|     DH|  501056|          69833|0.14|
    |      2001|    2013|     MQ| 4518106|         605698|0.13|
    |      2003|    2013|     FL| 1692887|         212069|0.13|
    |      2004|    2010|     OH| 1307404|         175258|0.13|
    |      2006|    2013|     YV| 1121025|         143597|0.13|
    |      2003|    2006|     RU| 1007248|         126733|0.13|
    |      1988|    2013|     UA|10717383|        1327196|0.12|
    +----------+--------+-------+--------+---------------+----+

Spark-shell does not show the query execution time, but it can be found in the Web UI or obtained with spark-sql. I re-ran the same query in spark-sql:

    ./bin/spark-sql --driver-memory 4G  --master spark://thor:7077
    spark-sql> CREATE TEMPORARY VIEW ontime
             > USING org.apache.spark.sql.jdbc
             > OPTIONS (
             >      url  "jdbc:mysql://localhost:3306/ontime?user=root&password=",
             >      dbtable "ontime.ontime_part",
             >      fetchSize "1000",
         >      partitionColumn "yearD", lowerBound "1988", upperBound "2014", numPartitions "48"
         >      );
    16/08/04 01:44:27 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 26; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2014.
    Time taken: 3.864 seconds
    spark-sql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10;
    16/08/04 01:45:13 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
    2003    2013    EV      2962008 464264  0.16
    2003    2013    B6      1237400 187863  0.15
    2006    2011    XE      1615266 230977  0.14
    2003    2005    DH      501056  69833   0.14
    2001    2013    MQ      4518106 605698  0.13
    2003    2013    FL      1692887 212069  0.13
    2004    2010    OH      1307404 175258  0.13
    2006    2013    YV      1121025 143597  0.13
    2003    2006    RU      1007248 126733  0.13
    1988    2013    UA      10717383        1327196 0.12
    Time taken: 139.628 seconds, Fetched 10 row(s)

The query ran almost 10 times faster (on the same single machine). But how exactly are these queries translated into MySQL queries, and why are they so much faster? Let's look inside MySQL to find out:
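As a quick sanity check on that claim, the two measured times (19 min 16.58 sec in MySQL, 139.628 sec through spark-sql) work out to roughly an 8x speedup on this single machine:

```python
mysql_seconds = 19 * 60 + 16.58  # 19 min 16.58 sec in MySQL
spark_seconds = 139.628          # same query through spark-sql
print(round(mysql_seconds / spark_seconds, 1))  # 8.3
```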

Inside MySQL

    Spark:

    scala> sqlDF.show()
    [Stage 4:>                                                        (0 + 26) / 26]

    MySQL:

    mysql> select id, info from information_schema.processlist where info is not NULL and info not like '%information_schema%';
    +-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | id    | info                                                                                                                                                                                                                                                    |
    +-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | 10948 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2001 AND yearD < 2002) |
    | 10965 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2007 AND yearD < 2008) |
    | 10966 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1991 AND yearD < 1992) |
    | 10967 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1994 AND yearD < 1995) |
    | 10968 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1998 AND yearD < 1999) |
    | 10969 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2010 AND yearD < 2011) |
    | 10970 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2002 AND yearD < 2003) |
    | 10971 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2006 AND yearD < 2007) |
    | 10972 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1990 AND yearD < 1991) |