My Attempt at Outperforming DeepMind’s Atari Results – UPDATE 3

Greetings!

Update time!

As in my previous update, I am still having problems with the function approximation not being accurate enough. I found some papers describing how Q-learning with function approximation can lead to overestimation of the Q values. Indeed, I noticed that when I changed the reward function, the highest Q values would typically either not decay at all or decay very slowly. To remedy this, I tried force-decaying the Q values of visited states over time, but this didn’t really help.
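
To make the idea concrete, here is a minimal sketch of the kind of update I mean, written as if it were tabular (the real agent uses a function approximator instead of a table). The names (QTable, visitDecay) and the constants are illustrative, not taken from the repo.

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>

// Sketch: Q-learning update with an extra "decay on visit" term.
// All names and constants are illustrative, not from AILib.
struct QTable {
    std::unordered_map<int, std::vector<float>> q; // state -> per-action values
    int numActions;
    float alpha = 0.1f;       // learning rate
    float gamma = 0.95f;      // discount factor
    float visitDecay = 0.99f; // shrink a visited state's values toward zero

    explicit QTable(int actions) : numActions(actions) {}

    std::vector<float>& values(int s) {
        auto it = q.find(s);
        if (it == q.end())
            it = q.emplace(s, std::vector<float>(numActions, 0.0f)).first;
        return it->second;
    }

    void update(int s, int a, float reward, int sNext) {
        std::vector<float>& qs = values(s);
        const std::vector<float>& qn = values(sNext);

        // Standard Q-learning target: r + gamma * max_a' Q(s', a')
        float maxNext = *std::max_element(qn.begin(), qn.end());
        float tdError = reward + gamma * maxNext - qs[a];
        qs[a] += alpha * tdError;

        // Force-decay the visited state's values so stale over-estimates
        // fade if the reward function changes (the trick that didn't help much).
        for (float& v : qs)
            v *= visitDecay;
    }
};
```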

After that, I tried to get a more accurate function approximation by using radial basis function (RBF) networks instead of the plain old feed-forward neural networks. Results here are promising: I immediately noticed a reduction in the temporal difference error, which suggests the approximator is predicting correctly more often. As far as I know, RBF networks usually have their prototypes/centers pre-trained using K-means clustering. However, since I am developing an entirely online algorithm, I had to adapt this to work continuously. To do so, I implemented a competitive unsupervised learning strategy that acts as an online form of K-means clustering, along the lines of the sketch below. I tested it on some benchmark supervised learning tasks and got much faster learning than with feed-forward neural networks (it consistently learned XOR in 30 updates).
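
Here is a minimal sketch of the kind of online RBF network I mean: the nearest center is pulled toward each input with a winner-take-all (competitive) rule, which behaves like a single step of online K-means, and the linear readout is trained with a simple delta rule. The class name and parameters are mine for illustration, not the ones in AILib.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal online RBF network sketch: competitive (online K-means style)
// center updates plus a delta-rule linear readout. Names are illustrative.
class OnlineRBFNet {
public:
    OnlineRBFNet(int numInputs, int numCenters)
        : centers(numCenters, std::vector<float>(numInputs)),
          weights(numCenters), bias(0.0f) {
        for (auto& c : centers)
            for (float& x : c)
                x = static_cast<float>(rand()) / RAND_MAX; // random init in [0, 1]
    }

    float predict(const std::vector<float>& input) {
        activations.resize(centers.size());
        float out = bias;
        for (size_t i = 0; i < centers.size(); ++i) {
            activations[i] = std::exp(-distSq(input, centers[i]) / (2.0f * width * width));
            out += weights[i] * activations[i];
        }
        return out;
    }

    void train(const std::vector<float>& input, float target) {
        float out = predict(input);

        // Competitive step: move only the closest center toward the input,
        // which is effectively one step of online K-means.
        size_t winner = 0;
        float best = distSq(input, centers[0]);
        for (size_t i = 1; i < centers.size(); ++i) {
            float d = distSq(input, centers[i]);
            if (d < best) { best = d; winner = i; }
        }
        for (size_t j = 0; j < input.size(); ++j)
            centers[winner][j] += centerRate * (input[j] - centers[winner][j]);

        // Delta rule on the linear readout.
        float error = target - out;
        for (size_t i = 0; i < centers.size(); ++i)
            weights[i] += outputRate * error * activations[i];
        bias += outputRate * error;
    }

private:
    static float distSq(const std::vector<float>& a, const std::vector<float>& b) {
        float s = 0.0f;
        for (size_t i = 0; i < a.size(); ++i) {
            float d = a[i] - b[i];
            s += d * d;
        }
        return s;
    }

    std::vector<std::vector<float>> centers;
    std::vector<float> weights;
    std::vector<float> activations;
    float bias;
    float width = 0.5f;       // shared Gaussian width
    float centerRate = 0.1f;  // center (prototype) learning rate
    float outputRate = 0.5f;  // readout learning rate
};
```

For an XOR-style test, you would just loop over the four input/target pairs and call train on each; the XOR result I quoted above came from my own implementation in the repo, not from this sketch.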

I also began taking a deeper look into policy gradient methods, since these are supposedly less affected by inaccuracies in function approximation. I may end up using a natural policy gradient method.
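
As a rough illustration of the vanilla (non-natural) version, a REINFORCE-style update for a softmax policy looks like the sketch below; a natural policy gradient would additionally precondition this gradient by the inverse Fisher information matrix. Everything here is an illustrative stand-in, not code from the project.

```cpp
#include <cmath>
#include <vector>

// Sketch of a REINFORCE-style update for a softmax policy with one
// preference value per action (no function approximation). Illustrative only.
struct SoftmaxPolicy {
    std::vector<float> preferences; // one preference per action
    float learnRate = 0.01f;

    explicit SoftmaxPolicy(int numActions) : preferences(numActions, 0.0f) {}

    std::vector<float> probabilities() const {
        std::vector<float> p(preferences.size());
        float sum = 0.0f;
        for (size_t i = 0; i < p.size(); ++i) {
            p[i] = std::exp(preferences[i]);
            sum += p[i];
        }
        for (float& v : p)
            v /= sum;
        return p;
    }

    // REINFORCE: move preferences along grad log pi(a) scaled by the return.
    void update(int action, float discountedReturn) {
        std::vector<float> p = probabilities();
        for (size_t i = 0; i < preferences.size(); ++i) {
            float grad = (static_cast<int>(i) == action ? 1.0f : 0.0f) - p[i];
            preferences[i] += learnRate * discountedReturn * grad;
        }
    }
};
```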

On the HTM front, I only did some parameter tuning. I am actually quite pleasantly surprised by how well it works; with all the scrutiny HTM is under, I expected worse. It produces stable patterns for my function approximators (the same state gives the same pattern, with some generalization allowed). We will soon see how well it scales up though 😉

Here is a video of it predicting the motion of a box. It is a bit old (I made it when I first got it working), but I thought I should share it anyway. The left side is the input and the right side is the output. When the left is blue, it is showing the “true” input; when it is red, it is showing the distributed representation.

That’s it for this update, progress is being made, slowly but surely!

Here is a link to the source code. The repository contains many agents; the one these posts focus on is called HTMRL and can be found in the directory of the same name! https://github.com/222464/AILib
