

Accelerating Machine Learning using BLIS

Santanu Thangaraj, Kiran Varaganti, Kiran Puttur, Pradeep Rao

Advanced Micro Devices, Inc.

Introduction: Taking advantage of the low latency and hierarchical memory architecture of x86 is critical to boosting the performance of computationally intensive applications, such as deep learning algorithms, on AMD platforms. Machine Learning (ML) algorithms are primarily built on top of basic linear algebra subprograms (BLAS); hence, the performance of these linear algebra routines directly impacts the performance of ML algorithms. In our experiments we use Caffe [4], a deep learning framework implementation, and compare its performance when linked against BLAS libraries such as BLIS [7] and OpenBLAS [9]. The existing BLIS library performs poorly when benchmarked with Caffe's handwritten digit recognition (MNIST challenge [1], [10]) deep layer model. We addressed this shortcoming in the BLIS library and optimized it to perform better for machine learning frameworks. We refer to the optimized library as the AMD optimized BLIS library.

BLAS specifications:

BLAS is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. These routines are the de facto standard low-level building blocks for linear algebra libraries and have bindings for both C and FORTRAN. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits; implementations take advantage of floating-point hardware such as vector registers and SIMD instructions. Examples of BLAS libraries include OpenBLAS [9], the University of Texas at Austin's BLAS-like Library Instantiation Software framework (BLIS) [7], and the Intel Math Kernel Library (MKL) [11].

The basic linear algebra operations exposed by BLAS libraries form a crucial component of machine learning algorithms. Many machine learning frameworks, including Caffe [4], depend on BLAS libraries to provide the required linear algebra functionality and can link against any of the standard BLAS libraries.
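As a concrete illustration, the sketch below shows how a framework might invoke the single-precision matrix multiply (SGEMM) through the standard CBLAS interface. It is a minimal example, not framework code: the dimensions and data are arbitrary values chosen for illustration, and it links against any BLAS library that provides the CBLAS interface (e.g. -lblis or -lopenblas).

```c
/* Minimal example of calling SGEMM through the standard CBLAS
 * interface. Dimensions and data are illustrative only. */
#include <cblas.h>

int main(void)
{
    enum { M = 4, N = 3, K = 2 };
    float A[M * K] = {1, 2, 3, 4, 5, 6, 7, 8};  /* M x K input matrix  */
    float B[K * N] = {1, 2, 3, 4, 5, 6};        /* K x N input matrix  */
    float C[M * N] = {0};                       /* M x N output matrix */

    /* C = 1.0 * A * B + 0.0 * C, row-major storage, no transposes */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0f, A, K, B, N, 0.0f, C, N);
    return 0;
}
```

Because every conforming BLAS library exposes this same interface, a framework compiled against one library can be relinked against another without source changes, which is how the comparisons in this paper were performed.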

AMD has adopted BLIS as its new BLAS library and will provide an optimized BLIS library for its microprocessors based on the new x86 architecture codenamed "Zen".


The BLIS framework was designed to isolate the essential kernels of computation that, when optimized, immediately enable optimized implementations of most of its commonly used and computationally intensive operations. BLIS is written in ISO C99 and is available under a new/modified/3-clause BSD license.

Deep Learning with Caffe: Convolutional Neural Networks (CNNs) are a successful class of DNNs. CNNs are computed using dense kernels that differ from traditional dense linear algebra routines. Accordingly, modern deep learning frameworks such as Caffe provide suites of custom kernels that implement basic operations such as tensor convolutions, activation functions, and pooling. These routines represent the bulk of the computation when training a CNN and thus account for the majority of its execution time. The deep learning community has been successful in finding optimized implementations of these kernels, but as the underlying architectures evolve, the kernels must be re-optimized, which is a significant investment. Optimizing them requires a deep understanding of the underlying processor architecture, with careful scheduling of data movement, on-chip memory placement, register blocking, and other optimizations in order to get acceptable performance.

Role of BLAS in DNN: The most important computational primitive in CNNs is a special form of batched convolution called spatial convolution [1], [5].

There are two inputs to the convolution: D ∈ ℝ^(N×C×H×W), which forms the input data, and F ∈ ℝ^(K×C×R×S), which forms the convolutional filters. The input data ranges over N images in a mini-batch, C input feature maps, H rows per image, and W columns per image. The filters range over K output feature maps, C input feature maps, R rows per filter, and S columns per filter. Computing this convolution involves a seven-way nested loop, with four independent loops and three accumulation loops [5]. There are many ways of implementing this computation. The Caffe MNIST benchmark training algorithm implements it by lowering the convolutions onto a matrix multiplication (GEMM). The GEMM gets invoked for small matrix sizes; therefore, the performance of small-matrix GEMM directly impacts the performance of the training algorithm. Optimized GEMM implementations are provided by BLAS libraries.
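To make the lowering concrete, the sketch below unrolls one image's input patches into a column matrix (the im2col transformation, in the same spirit as what Caffe does internally) and then computes all K output feature maps with a single SGEMM call. Stride 1 and no padding are assumed to keep the example short, and the function and variable names are illustrative, not Caffe's.

```c
/* Illustrative im2col + GEMM lowering of a spatial convolution for a
 * single image (stride 1, no padding). Real frameworks use an
 * equivalent but more general transformation. */
#include <cblas.h>

/* Unroll a C x H x W input into a (C*R*S) x (P*Q) patch matrix,
 * where P = H - R + 1 and Q = W - S + 1 are the output dimensions. */
static void im2col(const float *in, float *col,
                   int C, int H, int W, int R, int S)
{
    int P = H - R + 1, Q = W - S + 1;
    for (int c = 0; c < C; ++c)
        for (int r = 0; r < R; ++r)
            for (int s = 0; s < S; ++s)
                for (int p = 0; p < P; ++p)
                    for (int q = 0; q < Q; ++q)
                        col[((c * R + r) * S + s) * (P * Q) + p * Q + q] =
                            in[(c * H + (p + r)) * W + (q + s)];
}

/* out[K][P*Q] = filters[K][C*R*S] * col[C*R*S][P*Q] via one SGEMM. */
void conv_via_gemm(const float *in, const float *filters, float *col,
                   float *out, int C, int H, int W, int K, int R, int S)
{
    int P = H - R + 1, Q = W - S + 1;
    im2col(in, col, C, H, W, R, S);
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                K, P * Q, C * R * S,
                1.0f, filters, C * R * S, col, P * Q,
                0.0f, out, P * Q);
}
```

For the small layers of the MNIST network, the K, C*R*S, and P*Q dimensions that result from this lowering are all modest, which is why small-matrix GEMM performance dominates the training time.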

Small matrix GEMM optimization:

The BLIS library has six loops [12] around the GEMM computation, with the outermost loop parameters dependent on the L3 cache size, while the inner loops depend on the L1/L2 cache sizes. The packing of data required by the inner loops is done to avoid TLB misses.


This approach gives better performance for very large matrices that do not fit entirely in the cache hierarchy, but it introduces unnecessary overhead for small-matrix computations. We have optimized GEMM specifically for the small-matrix cases and observed significant performance improvements (refer to Figure 1).
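A minimal sketch of this blocking structure is shown below, assuming row-major storage and hypothetical block sizes; the real BLIS implementation derives the block sizes from the cache geometry, packs panels of A and B into contiguous buffers, and dispatches a hand-tuned SIMD micro-kernel in place of the plain triple loop used here.

```c
/* Simplified sketch of a BLIS-style blocked GEMM loop structure:
 * C (m x n) += A (m x k) * B (k x n), row-major. Block sizes are
 * hypothetical placeholders, not BLIS's tuned values. */
#include <stddef.h>

enum { NC = 256, KC = 128, MC = 64 };  /* illustrative block sizes */

void blocked_sgemm(size_t m, size_t n, size_t k,
                   const float *A, const float *B, float *C)
{
    for (size_t jc = 0; jc < n; jc += NC)          /* outer loop: L3-sized panel of B */
        for (size_t pc = 0; pc < k; pc += KC)      /* K blocking: B panel is packed here */
            for (size_t ic = 0; ic < m; ic += MC)  /* L2-sized block of A: A is packed here */
            {
                size_t nb = (jc + NC < n) ? NC : n - jc;
                size_t kb = (pc + KC < k) ? KC : k - pc;
                size_t mb = (ic + MC < m) ? MC : m - ic;
                /* In BLIS the three innermost loops iterate over
                 * register-sized micro-tiles and call an optimized
                 * micro-kernel; a plain triple loop stands in here. */
                for (size_t i = 0; i < mb; ++i)
                    for (size_t p = 0; p < kb; ++p)
                        for (size_t j = 0; j < nb; ++j)
                            C[(ic + i) * n + (jc + j)] +=
                                A[(ic + i) * k + (pc + p)] *
                                B[(pc + p) * n + (jc + j)];
            }
}
```

For matrices that fit comfortably in cache, the packing and multi-level blocking above are pure overhead; our small-matrix path skips them and operates on the operands in place.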

For our benchmarks we used Caffe version 1.0.0.rc3, OpenBLAS 0.2.20, BLIS 0.2.1 from the public open-source repository, and the AMD optimized BLIS version x.y (TBD). The experiments were run on the Ubuntu 15.04 operating system.

Figure 1. Single-thread SGEMM performance (GFLOPS versus matrix size, 5 through 290) with BLIS public, OpenBLAS, and Zen optimized BLIS; the optimized library shows an average improvement of 47%. Machine: AMD Naples, 64 cores, 256 GB RAM @ 3.2 GHz.

Caffe MNIST performance improvement:

With the GEMM optimization, the performance of the Caffe MNIST benchmark has improved, as shown in Figure 2. The forward pass shows a significant improvement (lower is better), and Caffe performance as a whole has improved by 17%.



Figure 2. Results of single-thread Caffe MNIST with BLIS public, OpenBLAS, and Zen optimized BLIS (lower time is better). Machine: AMD Zen Naples, 64 cores, 256 GB RAM @ 3.2 GHz.

BLAS library          Forward (ms)   Backward (ms)
BLIS Public               21.3           25.5
OpenBLAS                  19.3           23.5
Zen Optimized BLIS        17.8           23.3

Looking at the backward-pass numbers, there is room for further improvement. The reason is that a significant number of the GEMM calls made during the backward pass require a transpose of the input matrices, which the small-matrix code does not yet support.
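For reference, such a call differs from the forward-pass case only in its transpose flag. The sketch below shows a CBLAS invocation of the form C = Aᵀ·B with illustrative dimensions and a hypothetical function name; it is calls of this shape that currently fall back to the generic, non-small-matrix path.

```c
/* Backward-pass style GEMM: C = A^T * B through the standard CBLAS
 * interface. The CblasTrans flag is what routes such calls away from
 * the small-matrix fast path. Names and shapes are illustrative. */
#include <cblas.h>

void gradient_gemm(const float *A, const float *B, float *C,
                   int m, int n, int k)
{
    /* A is stored row-major as k x m; transposing yields the m x k
     * operand, so lda is the row length of A as stored, i.e. m. */
    cblas_sgemm(CblasRowMajor, CblasTrans, CblasNoTrans,
                m, n, k, 1.0f, A, m, B, n, 0.0f, C, n);
}
```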

Conclusion: Machine learning and Deep Neural Networks are gaining significant traction across industries for their application in automating everyday chores and bringing AI into everyday life. Most machine learning frameworks link with BLAS libraries during compilation. BLAS forms the de facto standard set of low-level routines for linear algebra; it is the layer upon which many other high-level dense linear algebra (DLA) applications are built. Having a highly optimized BLAS library is essential for accelerating ML frameworks.

By optimizing the level 1 and level 2 subroutines and the small-matrix GEMM, we were able to achieve a significant boost in the performance of Caffe run with MNIST. We observed that the performance benefit is not limited to Caffe; LAPACK [8] routines such as LU, QR, and Cholesky factorizations benefit as well.



References:

[1] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

[2] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. https://github.com/torch/torch7.

[3] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In SciPy, volume 4, page 3, 2010. https://github.com/Theano/Theano.

[4] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. https://github.com/BVLC/caffe.

[5] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, and John Tran. NVIDIA cuDNN: Efficient primitives for deep learning. https://arxiv.org/pdf/1410.0759.pdf.

[6] Kazushige Goto and Robert A. van de Geijn, The University of Texas at Austin. Anatomy of high-performance matrix multiplication. https://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf.

[7] BLAS-like Library Instantiation Software framework. https://github.com/flame/blis.

[8] LAPACK - Linear Algebra PACKage. http://www.netlib.org/lapack/.

[9] OpenBLAS: An optimized BLAS library. http://www.openblas.net.

[10] Caffe MNIST example: http://caffe.berkeleyvision.org/gathered/examples/mnist.html.

[11] Intel MKL: https://software.intel.com/en-us/intel-mkl.

[12] BLIS multithreading: https://github.com/flame/blis/wiki/Multithreading.


DISCLAIMER

The information contained herein is for informational purposes only, and is subject to change without notice. While every precaution has been taken in the preparation of this document, it may contain technical inaccuracies, omissions and typographical errors, and AMD is under no obligation to update or otherwise correct this information. Advanced Micro Devices, Inc. makes no representations or warranties with respect to the accuracy or completeness of the contents of this document, and assumes no liability of any kind, including the implied warranties of noninfringement, merchantability or fitness for particular purposes, with respect to the operation or use of AMD hardware, software or other products described herein. No license, including implied or arising by estoppel, to any intellectual property rights is granted by this document. Terms and limitations applicable to the purchase or use of AMD's products are as set forth in a signed agreement between the parties or in AMD's Standard Terms and Conditions of Sale. AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies. © 2017 Advanced Micro Devices, Inc. All rights reserved.