GPU Tuning Guide and Performance Comparison

How It Works?

In LightGBM, the main computation cost during training is building the feature histograms. We use an efficient algorithm on GPU to accelerate this process. The implementation is highly modular and works for all learning tasks (classification, ranking, regression, etc.). GPU acceleration also works in distributed learning settings. The GPU algorithm implementation is based on OpenCL and can work with a wide range of GPUs.

We target the AMD Graphics Core Next (GCN) architecture and the NVIDIA Maxwell and Pascal architectures. Most AMD GPUs released after 2012 and NVIDIA GPUs released after 2014 should be supported. We have tested the GPU implementation on the following GPUs:

- AMD RX 480 with AMDGPU-pro driver 16.60 on Ubuntu 16.10
- AMD R9 280X (aka Radeon HD 7970) with fglrx driver 15.302.2301 on Ubuntu 16.10
- NVIDIA GTX 1080 with driver 375.39 and CUDA 8.0 on Ubuntu 16.10
- NVIDIA Titan X (Pascal) with driver 367.48 and CUDA 8.0 on Ubuntu 16.04
- NVIDIA Tesla M40 with driver 375.39 and CUDA 7.5 on Ubuntu 16.04

Using the following hardware is discouraged:

- NVIDIA Kepler (K80, K40, K20, most GeForce GTX 700 series GPUs) or earlier NVIDIA GPUs. They do not support hardware atomic operations in local memory space, so histogram construction will be slow.
- AMD VLIW4-based GPUs, including the Radeon HD 6xxx series and earlier. These GPUs have been discontinued for years and are rarely seen nowadays.

To verify that your setup is correct, first run a few datasets for which we have confirmed good speedup (including Higgs, epsilon, Bosch, etc.). Also make sure your system is idle (especially when using a shared computer) to get accurate performance measurements. GPU acceleration works best on large-scale, dense datasets. If you have multiple GPUs, set gpu_platform_id and gpu_device_id to select the desired GPU.
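As a minimal sketch, device selection is done through the training parameters. The id values 0 and 1 below are placeholders for whatever your own OpenCL driver reports; the commented-out training call assumes a GPU-enabled build of the lightgbm package and a prepared Dataset named train_data.

```python
# Sketch: parameters for LightGBM's OpenCL-based GPU tree learner.
params = {
    "objective": "binary",
    "device": "gpu",       # use the GPU tree learner instead of the CPU one
    "gpu_platform_id": 0,  # OpenCL platform (placeholder: first vendor driver)
    "gpu_device_id": 1,    # which GPU on that platform (placeholder: second GPU)
}

# With a GPU-enabled build and a lgb.Dataset called train_data, you would run:
# import lightgbm as lgb
# booster = lgb.train(params, train_data, num_boost_round=100)
```

If the platform and device ids are omitted, the default OpenCL platform and its first device are used, so these keys mainly matter on machines with multiple GPUs or multiple OpenCL drivers installed.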