Title: Accelerating k-Means on GPU with CUDA Programming
Abstract: We accelerate the basic k-Means algorithm on the GPU using CUDA, NVIDIA's parallel programming model, and our experimental data show a maximum speedup of 67.752, whereas other teams report speedups of 20 to 40. We also find that the basic k-Means algorithm is most sensitive to the number of clusters k, less sensitive to the dataset size b, and least sensitive to the dimension d. In addition, we find that CUDA shared memory improves performance, though the gain depends on which of these factors is scaled.