
With a good, solid GPU, one can quickly iterate over deep learning networks, and run experiments in days instead of months, hours instead of days, minutes instead of hours.

So making the right choice when it comes to buying a GPU is critical.

Excited by what deep learning can do with GPUs, I plunged into multi-GPU territory by assembling a small GPU cluster with a 40 Gbit/s InfiniBand interconnect.

I was thrilled to see whether even better results could be obtained with multiple GPUs.

However, I also found that parallelization can be horribly frustrating. If you want to parallelize on one machine, your options are mainly CNTK, Torch, and PyTorch. These libraries yield good speedups (3.6x-3.8x) and have predefined algorithms for parallelism on one machine across up to 4 GPUs. Other libraries that support parallelism are either slow (like TensorFlow with 2x-3x), difficult to use for multiple GPUs (Theano), or both. If you put value on parallelism, I recommend using either PyTorch or CNTK; a short sketch of what this looks like in PyTorch follows below.
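To make this concrete, here is a minimal sketch of single-machine data parallelism in PyTorch using `nn.DataParallel`, one of the predefined parallelism mechanisms mentioned above. The network architecture, layer sizes, and batch size are arbitrary placeholders of my choosing, not anything from a specific experiment, and the actual speedup you see will depend on your model and hardware.

```python
import torch
import torch.nn as nn

# A small placeholder network; the architecture is arbitrary and
# only serves to illustrate the parallelism API.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)

if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the visible GPUs,
    # runs the forward pass on each replica, and gathers the
    # outputs back on the default device.
    model = nn.DataParallel(model)

model = model.cuda()

# One training-style forward/backward step on a random batch.
inputs = torch.randn(256, 1024).cuda()
targets = torch.randint(0, 10, (256,)).cuda()

outputs = model(inputs)
loss = nn.functional.cross_entropy(outputs, targets)
loss.backward()
```

The appeal of this approach is that the rest of the training loop stays unchanged: the batch is split across GPUs behind the scenes, so you get most of the multi-GPU speedup without restructuring your code.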
