My Attempt at Outperforming DeepMind’s Atari Results – UPDATE 3

Greetings!

Update time!

As in my previous update, I am still having problems with the function approximation not being accurate enough. I found some papers describing how combining Q learning with function approximation can lead to over-estimation of the Q values. Indeed, I noticed that when I changed the reward function, the Q values would typically either not decay from their highest values or would do so very slowly. To remedy this, I tried force-decaying the Q values over time upon state visitation, but this didn’t really help.
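In case that is unclear, the idea was roughly the following (a minimal sketch; the function name and decay constant are illustrative, not the actual code):

```cpp
#include <vector>

// Whenever a state is visited, shrink its stored Q values slightly so that
// over-estimates cannot persist indefinitely. (Illustrative only.)
void decayOnVisit(std::vector<float> &qValuesForState, float decay = 0.99f) {
    for (float &q : qValuesForState)
        q *= decay; // pull all action values toward zero a little on each visit
}
```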

After that, I attempted to get a more accurate function approximation. I tried using radial basis function networks instead of the plain old feed-forward neural networks. Results here are promising: I immediately noticed a reduction in the temporal difference error values (which suggests it is predicting correctly more often). As far as I know, radial basis function networks often have their prototypes/centers pre-trained using K-means clustering. However, since I am developing an entirely online algorithm, I had to adapt this to work continuously. To do so, I implemented a sort of competitive unsupervised learning strategy that mirrors an online form of K-means clustering. I tested it on some benchmark supervised learning tasks and got much faster learning than with feed-forward neural networks (I was able to consistently learn XOR in 30 updates).
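To give an idea of what I mean by a competitive, online K-means-style center update, here is a minimal sketch (the names, learning rate, and shared RBF width are illustrative; the actual code in the repository is more involved):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>
#include <limits>

// Illustrative RBF layer with winner-takes-all (online K-means style) center adaptation.
struct RBFLayer {
    std::vector<std::vector<float>> centers; // one prototype per hidden unit
    float centerLearnRate = 0.05f;           // illustrative value
    float width = 1.0f;                      // shared Gaussian width (sigma)

    // Move only the closest prototype toward the incoming sample.
    void adaptCenters(const std::vector<float> &input) {
        std::size_t best = 0;
        float bestDist = std::numeric_limits<float>::max();
        for (std::size_t i = 0; i < centers.size(); ++i) {
            float d = 0.0f;
            for (std::size_t j = 0; j < input.size(); ++j) {
                float diff = input[j] - centers[i][j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        for (std::size_t j = 0; j < input.size(); ++j)
            centers[best][j] += centerLearnRate * (input[j] - centers[best][j]);
    }

    // Gaussian activation of each hidden unit; a linear output layer would sit on top.
    std::vector<float> activate(const std::vector<float> &input) const {
        std::vector<float> out(centers.size());
        for (std::size_t i = 0; i < centers.size(); ++i) {
            float d = 0.0f;
            for (std::size_t j = 0; j < input.size(); ++j) {
                float diff = input[j] - centers[i][j];
                d += diff * diff;
            }
            out[i] = std::exp(-d / (2.0f * width * width));
        }
        return out;
    }
};
```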

I also began taking a deeper look into policy gradient methods, since these are supposedly less affected by inaccuracies in function approximation. I may end up using a natural policy gradient method.

On the HTM front, I only did some tuning of parameters. I am actually quite pleasantly surprised by how well it works; given all the scrutiny it is under, I expected worse. It produces stable patterns (the same state gives the same pattern, with some generalization allowed) for my function approximators. We will soon see how well it scales up though 😉

Here is a video of it predicting the motion of a box. It is a bit old (I made it when I first got it working), but I thought I should share it anyway. The left side is the input and the right side is the output. When the left side is blue, it shows the “true” input; when it is red, it shows the distributed representation.

That’s it for this update, progress is being made, slowly but surely!

Here is a link to the source code. The repository contains many agents; the one these posts focus on is called HTMRL and can be found in the directory of the same name! https://github.com/222464/AILib

My Attempt at Outperforming DeepMind’s Atari Results – UPDATE 2

Hello again!

I am now (almost) satisfied with the performance of the system on pole balancing. The main issue I am having right now is with the function approximators for the actor/critic: they are slow and not precise enough. Fortunately, the switch to advantage(lambda) learning helped compensate for the inaccuracy of the function approximation to some extent. However, I still need massive Tau values to get decent performance, so increasing the accuracy of the function approximators is a top priority.
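For those unfamiliar with it, here is a rough sketch of the kind of update I mean by advantage(lambda): an advantage-learning target combined with eligibility traces on the critic’s weights. The names and constants are illustrative only and do not match the repository’s actual code:

```cpp
#include <vector>
#include <cstddef>

// Illustrative advantage(lambda) update for a critic whose gradient with respect
// to its weights is available. With scale = 1 this reduces to Q(lambda); smaller
// scale values stretch the gaps between action values within a state, which is
// also why approximation error becomes such a problem.
void advantageLambdaUpdate(std::vector<float> &weights,
                           std::vector<float> &traces,      // same size as weights, start at 0
                           const std::vector<float> &gradA, // d A(s,a) / d w for the taken action
                           float reward,
                           float maxA,                      // max over actions of A(s, .)
                           float maxANext,                  // max over actions of A(s', .)
                           float predictedA,                // A(s, a) for the taken action
                           float alpha, float gamma, float lambda, float scale) {
    // Baird-style advantage-learning target: the temporal-difference term is divided by the scale.
    float target = maxA + (reward + gamma * maxANext - maxA) / scale;
    float error = target - predictedA;

    for (std::size_t i = 0; i < weights.size(); ++i) {
        traces[i] = gamma * lambda * traces[i] + gradA[i]; // accumulating eligibility trace
        weights[i] += alpha * error * traces[i];
    }
}
```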

Aside from that, I worked on the generalization capabilities of the function approximators. I added regularization and early stopping. I also started using a combination of epsilon-greedy and softmax action selection.
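As an aside, here is one plausible way to combine epsilon-greedy and softmax selection, just to illustrate the idea (the exact rule in the repository may differ, and the temperature parameter here is an assumption):

```cpp
#include <vector>
#include <random>
#include <cmath>
#include <algorithm>
#include <cstddef>

// With probability epsilon, sample from a softmax (Boltzmann) distribution over
// the action values, so exploration still favors promising actions; otherwise act greedily.
int selectAction(const std::vector<float> &qValues, float epsilon, float temperature, std::mt19937 &rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    if (uniform(rng) < epsilon) {
        float maxQ = *std::max_element(qValues.begin(), qValues.end());
        std::vector<float> weights(qValues.size());
        for (std::size_t i = 0; i < qValues.size(); ++i)
            weights[i] = std::exp((qValues[i] - maxQ) / temperature); // subtract maxQ for numerical stability
        std::discrete_distribution<int> dist(weights.begin(), weights.end());
        return dist(rng);
    }

    // Exploit: pick the action with the highest value.
    return static_cast<int>(std::distance(qValues.begin(),
        std::max_element(qValues.begin(), qValues.end())));
}
```

The nice property is that exploratory actions are still biased toward actions the critic currently thinks are good, instead of being uniformly random.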

I also experimented with using separate function approximators for each action (for the discrete action version). The intuition is that this reduces interference between the actions’ errors and makes it easier to learn the larger differences in outputs that result from advantage learning.
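Structurally it is as simple as it sounds; the sketch below shows the layout with a deliberately tiny stand-in approximator (the real networks are of course not linear one-liners):

```cpp
#include <vector>
#include <cstddef>

// Hypothetical stand-in for whatever function approximator is used; only here to show the layout.
struct SingleOutputNet {
    std::vector<float> weights; // one weight per input feature (a linear net for brevity)

    float predict(const std::vector<float> &input) const {
        float sum = 0.0f;
        for (std::size_t i = 0; i < input.size(); ++i)
            sum += weights[i] * input[i];
        return sum;
    }
};

// One approximator per discrete action: only the network for the action that was
// actually taken gets trained toward the (large) advantage-learning target, so the
// targets for different actions cannot interfere through shared weights.
struct PerActionCritic {
    std::vector<SingleOutputNet> nets; // nets.size() == number of discrete actions

    float qValue(int action, const std::vector<float> &state) const {
        return nets[action].predict(state);
    }
};
```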

I added plotting capabilities using sfml-plot, allowing me to better observe what effects my changes have.

Here is an image of the pole balancing experiment with plotting:

Until next time!

My Attempt at Outperforming DeepMind’s Atari Results – UPDATE 1

Hello again!

Time for an update!

The first thing I did after my original post was to try to optimize HTMRL so that I could run more gradient descent updates on the actor/critic portion. I started by experimenting with different numbers and sizes of HTM layers so that the last layer (the input to the actor/critic) is small enough to run fast but still large enough to convey sufficient information.

Secondly, I stopped using vanilla stochastic gradient descent for the actor/critic and started using RMSProp, just like in the original DeepMind paper. It was a simple change but quickly gave me better performance.
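For anyone unfamiliar with it, RMSProp is only a few lines; here is a minimal sketch of the per-weight update (the hyperparameter values shown are common defaults, not necessarily the ones I use):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Each weight keeps a running average of its squared gradient and is updated
// with a step size scaled by the inverse square root of that average.
void rmsPropUpdate(std::vector<float> &weights,
                   const std::vector<float> &gradients,
                   std::vector<float> &meanSquaredGrad, // same size as weights, starts at 0
                   float learnRate = 0.001f,
                   float decay = 0.9f,
                   float epsilon = 1e-8f) {
    for (std::size_t i = 0; i < weights.size(); ++i) {
        meanSquaredGrad[i] = decay * meanSquaredGrad[i] + (1.0f - decay) * gradients[i] * gradients[i];
        weights[i] -= learnRate * gradients[i] / (std::sqrt(meanSquaredGrad[i]) + epsilon);
    }
}
```

Because each weight’s step size is normalized by a running average of its squared gradient, learning is much less sensitive to the raw scale of the gradients than plain stochastic gradient descent.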

Thirdly, I started experimenting with ways to convert the binary information output by the last HTM region into a floating-point representation for the actor/critic, one that preserves information but does not hurt generalization. Originally I did a straight binary-to-floating-point conversion, but this makes similar HTM outputs result in vastly different values, which greatly hurts generalization. So instead I opted for a lossy but generalizing approach: sum the inputs within a region and divide by the maximum count. This doesn’t take all of the positional information of the input into account, but with a small enough condensing radius it can provide a decent compression/loss tradeoff.
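Here is a minimal sketch of that condensing step as I described it (the grid dimensions, indexing, and radius parameter are illustrative assumptions, not the repository’s exact code):

```cpp
#include <vector>

// Each output value is the fraction of active bits inside a small square window
// of the binary HTM output grid (row-major layout assumed).
std::vector<float> condense(const std::vector<int> &bits, int width, int height, int radius) {
    int step = 2 * radius + 1;
    std::vector<float> out;
    for (int cy = radius; cy < height; cy += step) {
        for (int cx = radius; cx < width; cx += step) {
            int count = 0, total = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int x = cx + dx, y = cy + dy;
                    if (x >= 0 && x < width && y >= 0 && y < height) {
                        count += bits[y * width + x];
                        ++total;
                    }
                }
            out.push_back(total > 0 ? static_cast<float>(count) / total : 0.0f);
        }
    }
    return out;
}
```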

Fourth, I played with some advantage learning replacements for standard Q learning. Advantage learning works better than standard Q learning in continuous environments, since it amplifies the differences in state-action values between successive timesteps. This makes it less susceptible to errors in the function approximation as well as to noise.
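Concretely, the difference is in the target the critic is trained toward. Roughly, following Baird’s advantage learning formulation (the symbols here are for illustration; the constants in my code may be named differently):

```latex
\[ \text{Q-learning target:}\quad r + \gamma \max_{a'} Q(s', a') \]

\[ \text{Advantage-learning target:}\quad \max_{a} A(s, a) + \frac{r + \gamma \max_{a'} A(s', a') - \max_{a} A(s, a)}{k}, \qquad 0 < k \le 1 \]
```

With k = 1 this reduces to the plain Q-learning target; with k < 1 the temporal-difference term is stretched, so the gaps between action values become larger relative to approximation error and noise.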

Finally, I received help from the SFML community in making the code work on more platforms. They even added CMake support for me! Many thanks!

That’s it for this update, time to get back to coding!

My Attempt at Outperforming DeepMind’s Atari Results

Hello!

I am a reinforcement learning hobbyist, and I made myself a challenge: To outperform DeepMind’s reinforcement learning agent in the Arcade Learning Environment!

I will be posting updates here as I go!

The codebase I am using (currently separate from the ALE for easier experimentation) is available here: https://github.com/222464/AILib

It contains a large number of agents; the one I am currently working on is called HTMRL (hierarchical temporal memory reinforcement learning).

What do I do differently?

First off, I do not have a fixed time window of previous inputs to “solve” the hidden state problem. Rather, I am using HTM (hierarchical temporal memory) to form a context for the input automatically, as well as compress the input down to a manageable number of features.

From there, I use simple feed-forward neural networks as the actor/critic (I have two versions; one has an actor for continuous actions, which is not necessary for the ALE). These take the output of the last HTM region as input.

The critic-only version (discrete actions) uses standard Q learning updates plus eligibility traces.

The actor-critic version maintains a Q function in the critic and uses a form of policy gradient to optimize the actor on the Q values.
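To illustrate what I mean by that last part, here is one common form of it, a CACLA-style update (this is only a hedged sketch; the repository may use a different policy gradient rule, and in practice the adjusted output would be used as a training target for the actor network):

```cpp
#include <vector>
#include <cstddef>

// When the critic's temporal-difference error is positive, the exploratory action
// did better than expected, so the actor's output is pulled toward it.
void updateActorTarget(std::vector<float> &actorOutput,       // actor's action for state s (illustrative)
                       const std::vector<float> &takenAction, // exploratory action actually executed
                       float tdError,
                       float actorLearnRate) {
    if (tdError > 0.0f)
        for (std::size_t i = 0; i < actorOutput.size(); ++i)
            actorOutput[i] += actorLearnRate * (takenAction[i] - actorOutput[i]);
}
```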

Right now I am working on getting it to function flawlessly on the pole balancing task before scaling up to the ALE.

Here is an image of HTMRL performing pole balancing. The top right shows the highest-level HTM region.

More coming soon!