cuDNN LSTM Engine Now Supported: Learn Shakespeare 5x Faster!

In our latest release, version 0.10.0.140, we have added cuDNN engine support to the LSTM layer, which solves the Char-RNN 5x faster than the CAFFE engine. As described in our last post, the CAFFE version (originally created by Donahue et al. [1]) uses an internal Unrolled Net to implement the recurrent nature of …
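
For readers unfamiliar with the Unrolled Net, the NumPy sketch below (purely conceptual, not MyCaffe code; the layer sizes are made up) shows what "unrolling" means: the same LSTM cell is applied once per time step in an explicit Python loop, which is the work the cuDNN engine instead fuses into a single optimized call over the whole sequence.

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM time step: gates computed from the input and the previous hidden state."""
    z = W @ np.concatenate([x, h]) + b          # (4H,) pre-activations for i, f, o, g
    H = h.shape[0]
    i = 1.0 / (1.0 + np.exp(-z[0:H]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))         # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))       # output gate
    g = np.tanh(z[3*H:4*H])                     # candidate cell state
    c = f * c + i * g                           # new cell state
    h = o * np.tanh(c)                          # new hidden state
    return h, c

def unrolled_forward(xs, W, b, hidden):
    """'Unrolled' recurrence: the same cell applied at every time step in sequence."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    hs = []
    for x in xs:                                # one loop iteration per time step
        h, c = lstm_step(x, h, c, W, b)
        hs.append(h)
    return np.stack(hs)

# Example: a sequence of 25 one-hot characters over a 65-symbol vocabulary.
T, V, H = 25, 65, 128
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * H, V + H)) * 0.01
b = np.zeros(4 * H)
xs = np.eye(V)[rng.integers(0, V, size=T)]
print(unrolled_forward(xs, W, b, H).shape)      # (25, 128)
```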

Recurrent Learning Now Supported with cuDNN 7.4.1 on Char-RNN to learn Shakespeare!

In our latest release, version 0.10.0.122, we now support Recurrent Learning with both the LSTM [1] and LSTM_SIMPLE [2] layers, using the recently released CUDA 10.0.130/cuDNN 7.4.1. With these layers we solve the Char-RNN as described by [3] (and inspired by adepierre [4]) to create a Shakespeare sonnet.

To create the Shakespeare above, we …
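
As a rough illustration of how a Char-RNN turns a trained network into text (a conceptual NumPy sketch only, not the MyCaffe API; step_fn is a hypothetical callback standing in for one forward pass of the network), generation simply samples the next character from the network's output distribution and feeds it back in as the next input:

```python
import numpy as np

def sample_next_char(logits, vocab, temperature=1.0, rng=np.random.default_rng()):
    """Sample the next character index from the network's output scores (softmax with temperature)."""
    z = logits / temperature
    z = z - z.max()                      # subtract the max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(vocab), p=p)

def generate(step_fn, seed_idx, vocab, length=200, temperature=0.8):
    """Feed each sampled character back in as the next input, building up the text."""
    idx, out = seed_idx, []
    for _ in range(length):
        logits = step_fn(idx)            # hypothetical callback: one forward step of the RNN
        idx = sample_next_char(logits, vocab, temperature)
        out.append(vocab[idx])
    return "".join(out)

# Toy usage with a fake "network" that returns random scores.
vocab = list("abcdefghijklmnopqrstuvwxyz ,.\n")
rng = np.random.default_rng(0)
fake_step = lambda idx: rng.standard_normal(len(vocab))
print(generate(fake_step, seed_idx=0, vocab=vocab, length=80))
```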

Policy Gradient Reinforcement Learning Now Supported with cuDNN 7.3.1 on an ATARI Gym!

In our latest release, version 0.10.0.76, we now support multi-threaded, Policy Gradient Reinforcement Learning on the Arcade-Learning-Environment [4] (based on the ATARI 2600 emulator [5]) as described by Andrej Karpathy [1][2][3], and do so with the recently released CUDA 10.0.130/cuDNN 7.3.1. Using the simple Sigmoid-based policy gradient reinforcement learning model shown below… … the SignalPop …
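
For a rough idea of what "Sigmoid-based" means here (a conceptual NumPy sketch in the spirit of Karpathy's write-up, not the MyCaffeTrainerRL code; the layer sizes are illustrative), the policy emits a single sigmoid probability for one of two actions, and the per-step gradient signal (action - p) is weighted by the normalized, discounted rewards of the episode:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, gamma = 80 * 80, 200, 0.99                     # illustrative input size, hidden units, discount
W1 = rng.standard_normal((H, D)) / np.sqrt(D)
W2 = rng.standard_normal(H) / np.sqrt(H)

def policy_forward(x):
    """Sigmoid policy: a single output giving P(action = UP); P(DOWN) = 1 - p."""
    h = np.maximum(0.0, W1 @ x)                      # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(W2 @ h)))              # sigmoid output
    return p, h

def discount_rewards(rewards, gamma):
    """Discounted return at every time step; these weight the policy gradients."""
    out, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        out[t] = running
    return out

def gradient_signal(actions, probs, rewards):
    """REINFORCE signal per step: (action - p) scaled by the normalized discounted return."""
    adv = discount_rewards(np.asarray(rewards, dtype=float), gamma)
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    return (np.asarray(actions) - np.asarray(probs)) * adv

# Toy usage on random data: three time steps of an episode.
x = rng.standard_normal(D)
p, _ = policy_forward(x)
action = 1 if rng.random() < p else 0                # sample UP (1) or DOWN (0)
print(gradient_signal([action, 0, 1], [p, 0.4, 0.6], [0.0, 0.0, 1.0]))
```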

Softmax-based Policy Gradient Reinforcement Learning Now Supported with CUDA 10!

In our latest release, version 0.10.0.24, we now support multi-threaded, Softmax-based Policy Gradient Reinforcement Learning as described by Andrej Karpathy [1][2][3], and do so with the recently released CUDA 10.0.130/cuDNN 7.3. Using the simple Softmax-based policy gradient reinforcement learning model shown below… … the SignalPop AI Designer uses the MyCaffeTrainerRL to train the model …
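
The difference from the Sigmoid-based version is the output head: a Softmax over all available actions, with the per-step gradient of log π(a|s) taking the familiar (one-hot(a) - p) form scaled by the advantage. A small conceptual NumPy sketch (not the MyCaffeTrainerRL code):

```python
import numpy as np

def softmax(z):
    """Convert raw action scores into a probability distribution."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def select_action(logits, rng):
    """Sample an action from the softmax distribution over all actions."""
    p = softmax(logits)
    a = rng.choice(len(p), p=p)
    return a, p

def policy_gradient_signal(action, p, advantage):
    """REINFORCE-style gradient of log pi(a|s) w.r.t. the logits: (one_hot(a) - p) * advantage."""
    y = np.zeros_like(p)
    y[action] = 1.0
    return (y - p) * advantage

# Toy usage: four possible actions; a positive advantage pushes probability toward the sampled action.
rng = np.random.default_rng(0)
logits = np.array([0.2, -0.1, 0.05, 0.0])
a, p = select_action(logits, rng)
print(a, policy_gradient_signal(a, p, advantage=+1.0))
```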

Policy Gradient Reinforcement Learning Now Supported!

In our latest release, version 0.9.2.188, we now support Policy Gradient Reinforcement Learning as described by Andrej Karpathy [1][2][3], and do so with the recently released CUDA 9.2.148 (p1)/cuDNN 7.2.1. For training, we have also added a new Gym infrastructure to the SignalPop AI Designer, where the dataset in each project can either be a standard …
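
As a sketch of the interaction pattern such a gym provides (ToyEnv, choose_action, and record are hypothetical stand-ins, not the SignalPop Gym API), a trainer repeatedly resets the environment and steps it with actions sampled from the current policy:

```python
import random

class ToyEnv:
    """Stand-in environment: reward 1 per step, episode ends after 10 steps."""
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return float(self.t), 1.0, self.t >= 10       # observation, reward, done

def train_episodes(env, choose_action, record, num_episodes=3):
    """The reset/step loop a policy-gradient trainer runs against a gym environment."""
    for episode in range(num_episodes):
        observation, done, total = env.reset(), False, 0.0
        while not done:
            action = choose_action(observation)            # sample from the current policy
            observation, reward, done = env.step(action)   # advance the environment
            record(observation, action, reward)            # store for the policy update
            total += reward
        print(f"episode {episode}: reward {total}")

train_episodes(ToyEnv(), choose_action=lambda obs: random.randint(0, 1), record=lambda *a: None)
```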

Deep Convolutional Auto-Encoders for MNIST Now Supported!

In our latest release, version 0.9.2.122, we now support deep convolutional auto-encoders with pooling as described by [1], and do so with the newly released CUDA 9.2.148/cuDNN 7.1.4. Auto-encoders are models that learn how to re-create the input fed into them. In our example shown here, the MNIST dataset is fed into our model… …
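
The actual model is defined in MyCaffe; purely to illustrate the idea (a convolutional encoder with pooling, a transposed-convolution decoder, and the input itself as the training target), here is a minimal PyTorch sketch:

```python
import torch
from torch import nn

class ConvAutoEncoder(nn.Module):
    """Encoder halves the 28x28 MNIST image twice with pooling; decoder upsamples it back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7 bottleneck
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2), nn.ReLU(),    # 7x7 -> 14x14
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2), nn.Sigmoid()  # 14x14 -> 28x28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# The training target is the input itself: minimize the reconstruction error.
model = ConvAutoEncoder()
x = torch.rand(4, 1, 28, 28)                              # stand-in for a batch of MNIST digits
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
print(loss.item())
```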