Massively parallel self-organizing maps: accelerate training on multicore CPUs, GPUs, and clusters
Somoclu is a massively parallel implementation of self-organizing maps. It exploits multicore CPUs, it can distribute the workload across a cluster with MPI, and it can be accelerated by CUDA on GPUs. A sparse kernel is also included, which is useful for training maps on the sparse vector spaces generated in text mining.
Key features:
- Fast execution by parallelization: OpenMP, MPI, and CUDA are supported.
- Multi-platform: Linux, macOS, and Windows are supported.
- Planar and toroid maps.
- Rectangular and hexagonal grids.
- Gaussian and bubble neighborhood functions.
- Both dense and sparse input data are supported.
- Large maps of several hundred thousand neurons are feasible.
- Integration with Databionic ESOM Tools.
- Python, R, Julia, and MATLAB interfaces for the dense CPU and GPU kernels (see the Python sketch below).
For more information, refer to the manuscript about the library [1].
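
To illustrate the Python interface, here is a minimal sketch assuming the somoclu package is installed from PyPI. The data, map size, and epoch count are arbitrary toy choices, not recommendations; the map settings shown are the library's defaults and mirror the feature list above.

```python
import numpy as np
import somoclu

# Toy data: 1000 random points in 3 dimensions (Somoclu expects float32).
data = np.random.rand(1000, 3).astype(np.float32)

# A 50x40 planar map on a rectangular grid with a Gaussian neighborhood;
# these settings are the library defaults, spelled out for clarity.
som = somoclu.Somoclu(50, 40, maptype="planar", gridtype="rectangular",
                      neighborhood="gaussian")

# Train with the dense CPU kernel (the default); a CUDA build can select
# the GPU kernel via the kerneltype argument of the constructor instead.
som.train(data, epochs=10)

# Inspect the trained codebook and each point's best matching unit.
print(som.codebook.shape)  # (n_rows, n_columns, n_dim)
print(som.bmus[:5])        # grid coordinates of the BMU per data point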
Basic Command Line
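A basic invocation might look like the following sketch. The flag names reflect the library's documented command-line options (verify against `somoclu --help`), and the input file and output prefix are placeholders.

```bash
# Train a 50x40 map for ten epochs with the dense CPU kernel (-k 0);
# input.txt and output_prefix are placeholder paths.
somoclu -e 10 -x 50 -y 40 -k 0 input.txt output_prefix

# The same run distributed across four MPI processes:
mpirun -np 4 somoclu -e 10 -x 50 -y 40 input.txt output_prefix
```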