AI - Artificial Intelligence


3 weeks later...

Looks like the name of the thread was not great. Can't find it using the search function.  :( Should have named it AI - Artificial Intelligence.  ::) I don't think I can rename it right now.

 

Anyway, today I read a pretty impressive paper on predicting index X with deep neural nets using a 20XX-20XX dataset. Great results. Not sure if this is one of those situations where backtesting works but the approach has since stopped working. I would have thought their approach should not have worked on historical data either, since it seems this should have been exploited already. They do have a training set, a test set, and some additional checks, so unless they did something "bad", the results seem to be real.

 

Edit: After some thinking, I have some questions/ideas/reservations/inquisitions to test whether this really works or not. If anyone is familiar with DNNs and wants to run some experiments, shoot me a message and we can play around. The results could be a contribution to science, or monetary (if you get a model that works, you can just use it to invest  ;D - most likely you won't). I might do it myself, but I need time and motivation... yeah, a yuge pile of money is not motivation enough.  :P  ::)  8)  ;D

 

I'll point to the paper when it becomes publicly available.  8)


If you hit "modify" on the original post in the thread, you can edit the title.


I have a decent amount of experience with deep neural nets, but mostly with LSTMs, not conv nets. My friend has an AI trading strat that is incredibly successful, but I told him that if I worked in the area I'd find something else. PM me if you're interested in discussing more.


Just cross-validation during bull market years? I've played around with it a bit but never been comfortable enough with the algo [even worse with NNs]. I'm very scared of blowing up with these over-fitted models that have only seen rising markets...

 

I think the main criticism of these "paper" strategies is that you have 1000s of academics looking for signals and only the winners publish a paper. The signals they find are basically the result of survivorship bias.

 

Do you guys have Slack? Maybe it's time we start a CoBF Slack group.


Just cross-validation during bull market years?

 

No. First, you can't - or you shouldn't - cross-validate in the conventional sense, since you should not apply an algo trained on future data to past data. At least these guys don't do it, and IMO that's the correct approach. I guess you can do limited cross-validation with that restriction, but it may cut your data amount a lot, which means worse training and worse results, I'd guess (see below too).

 

They do something clever, but they don't explain it well, so it may be "really smart" or "really not helpful".

 

I've played around with it a bit but never been comfortable enough with the algo [even worse with NNs]. I'm very scared of blowing up with these over-fitted models that have only seen rising markets...

 

Well, first, I assume you already split the data into training/dev/test sets, yes? Then you should not overfit; otherwise you won't get good results on the test data.

Second, why do you assume you train/test only on rising markets? There's data for more than that... you can test on data that includes the 2007-2009 crash, no? :)

 

Though I agree that there are issues:

1. Depending on what you are training/etc., there might not be much data. E.g., if you train on daily prices and you have 10 years of data, that's ~3.5K data points, which is quite low when you think about NN training sets.

2. Even if you include periods with crashes, there might be only 1-2 big crashes in the training/test data, which is also quite sparse in terms of data... so yeah, there's a risk that a "different" downturn/crash/whatever may not be handled (well).
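To make the no-lookahead point above concrete: the split has to be chronological rather than shuffled. A minimal sketch (the function name and split fractions are mine, purely illustrative):

```python
import numpy as np

def chronological_split(series, train_frac=0.7, dev_frac=0.15):
    """Split a time series into train/dev/test in time order,
    so the model is never fit on data from 'the future'."""
    n = len(series)
    i = int(n * train_frac)
    j = int(n * (train_frac + dev_frac))
    return series[:i], series[i:j], series[j:]

# ~10 years of daily prices is only a few thousand points
prices = np.arange(3500)
train, dev, test = chronological_split(prices)
```

Note how small the dev and test slices get: a few hundred points each, with maybe one crash among them.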

 

I think the main criticism of these "paper" strategies is that you have 1000s of academics looking for signals and the winners publish a paper. The signals they find are basically the result of survivorship bias.

 

I don't think this is the case, but as I said, I have other reservations.

 

Do you guys have Slack? Maybe it's time we start a CoBF Slack group.

 

No, I don't have Slack. 8) But if you guys want to move further discussion to a limited group, we can organize something.  8)


Wow, so good luck with this? This is a really, really hard thing to do without a good dataset. I imagine most quant funds (e.g. Renaissance or Bridgewater) have access to large, high-quality datasets. I seriously doubt you will do well running LSTMs on widely available datasets (also, don't use LSTMs - use GRUs).

 

Anyway, if you are serious about this, a good place to start for tools is probably Quantopian. I know one of the principals there, and while I don't think I can vouch for their financial market chops, their toolsets are probably pretty good (i.e. their Python interfaces).
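Since GRUs came up: here's a single GRU step in plain NumPy, just to show the two-gate structure (versus the LSTM's three gates), which means fewer parameters — one reason they're often preferred on small datasets. All names and weights here are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    z = sigmoid(Wz @ x + Uz @ h)               # how much to update the state
    r = sigmoid(Wr @ x + Ur @ h)               # how much past state to expose
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))    # candidate new state
    return (1.0 - z) * h + z * h_cand

# With all-zero weights: z = 0.5, candidate = 0, so the state just halves
d = 4
zeros = [np.zeros((d, d)) for _ in range(6)]
h_new = gru_step(np.ones(d), np.ones(d), *zeros)
```

In practice you'd use a library implementation (e.g. a framework's GRU layer) rather than hand-rolling this, but the cell really is this small.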


I think you are right that the competition in this is growing exponentially (lol, I just love when someone says "exponentially", especially in a bull-case writeup  ;D I need to do this exponentially more often  ;D ). And that's definitely an issue, because backtesting will mostly test on data where there were far fewer competitors affecting the market. So you may get good historical results, but the future performance will be crap. Of course, what really matters is whether the real-money algos are saturating and killing the prediction edge or not. That is tough to measure. I'm sure the hardcore funds have some kind of metric for noticing when an algo gets "exhausted" so they can shut it off or whatever. But this is where basic theory is not sufficient, I'd say.

 

Anyway, thanks for the input.

2 weeks later...

So I spent a bunch of time reimplementing what these guys presumably implemented.

I do not get their results. Mine are pretty much at the level of random guessing.

It's quite possible I am not doing something the same way they did.

As I said before, I'll link to their paper once it's publicly available and someone else might be able to replicate their results ... or not.

I may also post or send my implementation to anyone interested after the paper is publicly available, so people can shoot holes in what I did... although I don't promise to clean up the code much... right now it's a prototype-level mess.  8)

 


I think where AI can really be helpful is in predicting earnings, which you can then use to build better value portfolios. If you feed an AI the noise of the markets, you will get all types of correlations that don't hold up in reality. I read an article not long ago where they reduced analysts' error rate on earnings projections from 40% to 20%. I can imagine that when you use additional data like credit card information, you can get really good earnings forecasts.

Personally, I wouldn't trust an AI black box, and I am 100% sure that I would abandon the approach when the first larger drawdown happens. It's really hard to determine whether you just have a "normal" drawdown or the model has stopped working, so in the end the human will always be the weak link in this, regardless of how automated the whole approach is.


The dirty secret in AI research is that everyone is secretly overfitting their ANNs by fiddling with the architecture of the model and peeking at test-set results.  Only the papers with actually impressive results get published, so you have a publication bias.  That doesn't mean a lot of techniques don't work, but they likely don't work as well as the paper would lead you to believe.


This was brought up upthread. In general it is true.

 

I don't think this is what's happening in this case, though, but I'd rather not get into abstract discussions of why I don't think that's the case. OTOH, I can't really explain their results either, so who knows. Let's postpone this discussion until you guys have the paper.  ;)


 

You could try a reinforcement learning approach rather than just a supervised learning approach.  The upside here is that the algorithm could learn to deal with risks and optimize a portfolio.  The methods discussed in the OpenAI posts, TRPO and PPO, are very powerful both theoretically and practically, and PPO is really easy to implement.


 

I don't know reinforcement learning in depth. I wonder if there's enough data to run RL on stock prices, unless you do it on intraday pricing, which I don't really want to do. I think it's the same issue as with supervised learning: 10 years of daily data is only 3,500 data points or so, with only 2-3 crashes in the data set.

 

But I'd have to read up on RL to see if there's a way to apply it. If/when I have time. Thanks for bringing it up. 8)


A few additions, as one of my partners has significant expertise in this area.

 

Trying to predict outcomes from a disparate data flow is a fool's game.

At best the prediction is just a more precise guess, but it only has to be better than the other guy's. Backtesting is typically against a VaR model, with an AI algorithm that is 'fitted' to the data. Hence the predictive power has to be truly awful to fail the test parameters, yet most do. They can all predict a number, but the +/- standard deviation is so high as to make it essentially useless.

 

Noise versus signal is typically addressed by applying opposing white noise (randomly generated) against the source. Viewed on an oscilloscope, you would see a flat line with spikes/valleys suggesting signal. Increase the opposing white noise and you will see fewer but stronger signals - if they exist. The process is well understood, and widely used in robotic industrial bottling to produce a 'fill' within preset upper and lower boundaries at a CI of 95% or better.

 

An AI robot continually sniffing continually sees 'new' signal, and could trade accordingly - we call this 'learning'. The problem is that for this to work, the future data stream has to look similar to the 'sampled' historic data stream. The repeated backtesting failures tell us that this isn't the case. It also ignores competitors deliberately introducing toxic 'data points' into the market to screw up your algorithm - and trade against it.

 

At the end of the day, it essentially remains a zero-sum game.

The AI slice of industry profit barely covers its costs, and comes at the cost of smaller slices for institutional and retail clients.

Speed, number of transactions, and trading volumes increase - but with no net benefit.

 

Not quite what we're being led to believe.

 

SD

 

The best returns come from intraday algorithms, not the fundamental-type analysis we are all used to.  The reason is that these algorithms may be able to average, say, 10 basis points per trade after costs (just an example; your algos probably aren't that good).  But if your holding periods are a couple of hours or even minutes, you can make 100%+ in a year, which is just not attainable with any longer-horizon algorithm.
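The compounding arithmetic behind that claim is easy to check; a quick sketch (numbers purely illustrative, ignoring capacity and slippage):

```python
def annual_return(bps_per_trade, trades_per_day, trading_days=252):
    """Compound a small per-trade edge over many intraday trades."""
    per_trade = 1.0 + bps_per_trade / 10_000
    return per_trade ** (trades_per_day * trading_days) - 1.0

# 10 bps once per day compounds to roughly 29%/yr;
# the same edge at ~3 trades per day exceeds 100%/yr
once = annual_return(10, 1)
thrice = annual_return(10, 3)
```

The point is that a tiny edge turned over many times a day compounds into returns no longer-horizon strategy can match, which is exactly why the short holding period matters.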


 

Thanks for the comments. You are likely right, but I have very little interest in intraday-based algos for a variety of reasons. 8)

3 weeks later...

The paper I was talking about is

"Dow Jones Trading with Deep Learning: The Unreasonable Effectiveness of Recurrent Neural Networks"

to be presented at http://insticc.org/node/TechnicalProgram/data/presentationDetails/69221

 

The paper is not publicly available, but you can ask the authors for a copy. I have a copy and can send it to people interested, but I won't post it here publicly. PM me if you want a copy.

 

A couple of comments on various things previously mentioned, now that the paper is semi-public:

 

- The paper predicts the daily close of the DJIA from the daily open plus the opens of the previous n days (2-10).

- The trading algorithm is simply: buy if predicted close > open, and sell otherwise. If you cannot buy (you already have a position), hold. If you cannot sell (you already hold cash), hold cash.

- The authors use training data from 01/01/2000-06/30/2009 and test data from 07/01/2009 to 12/31/2017. This somewhat answers the critique that the training is from a bull market: it's not. The testing is not entirely from a bull market either.

- The authors use a pretty much vanilla LSTM, so IMO the critique that "1000s of academics looking for signals and the winners publish a paper", or that they tweaked/over-fitted the model until it worked, does not seem to apply. (It's possible that they messed up somehow and used testing data in training, but they seem to be careful, so it doesn't seem very likely.) This is really vanilla, IMO, without much tweaking at all. Which makes the results surprising, BTW.

- I have some other comments, but I'd rather discuss this further with people who have read the paper, so I won't post them now.  8)
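My reading of the trading rule above, as a long/flat backtest sketch (the simplifications — open-to-close moves only, no overnight gaps, no costs — are mine, not the authors'):

```python
def run_long_flat(opens, closes, predicted_closes):
    """Long/flat sketch of the rule: long for the day if the model
    predicts close > open, otherwise in cash. Ignores overnight gaps
    and trading costs (a deliberate simplification)."""
    equity = 1.0
    for o, c, p in zip(opens, closes, predicted_closes):
        if p > o:              # buy signal (or keep holding)
            equity *= c / o    # capture the open-to-close move
    return equity

# Long only on day 1 (predicted 105 > open 100): equity = 110/100 = 1.10
eq = run_long_flat([100, 100], [110, 90], [105, 95])
```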

 

As I mentioned, I spent a bunch of time reimplementing what these guys presumably implemented.

I do not get their results. Mine are pretty much at the level of random guessing, i.e. my accuracy is around 48-52%, while they get up to 80% accuracy.

It's quite possible I am not doing something the same way they did.

It's possible that their implementation or testing is messed up somehow too. But it's hard to prove that. Maybe they'll open-source their implementation sometime in the future.  8)
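The accuracy I'm quoting is directional accuracy, computed roughly like this (this is my metric; the paper's exact definition may differ):

```python
import numpy as np

def directional_accuracy(opens, closes, predicted_closes):
    """Fraction of days on which the predicted direction
    (predicted close vs open) matches the realized direction."""
    o = np.asarray(opens, dtype=float)
    pred_up = np.asarray(predicted_closes, dtype=float) > o
    real_up = np.asarray(closes, dtype=float) > o
    return float(np.mean(pred_up == real_up))

# Two right calls out of four -> 0.5, i.e. coin-flip territory
acc = directional_accuracy([1, 1, 1, 1], [2, 0.5, 2, 0.5], [2, 2, 0.5, 0.5])
```

Anything that hovers around 50% on this metric is indistinguishable from guessing, which is exactly where my reimplementation sits.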

 

If anyone is interested in getting together (online, through some tools?) to go through the paper and/or my implementation, we can do it. PM me and we'll try to figure out what would work best.  8)


I don't know what the authors did, but I'll reiterate from before: vanilla LSTMs do little better than guessing on the stock market.  They probably had like 1000 GPUs and tested thousands of hyperparameter configurations and "overfit" the test set.  This is why papers like this typically are not believed anymore in the ML literature.  Try adding some stuff like attention or skip connections and whatever else is hot now (I'm not sure), and didn't someone recommend GRUs instead?  I have some other ideas you could use, like Gaussian processes to estimate real-time covariance matrices, but you're better off looking at the literature first than trying out harebrained ideas that might not work.  It's really not a trivial exercise to outperform the market with ML.


The paper I was talking about is

"Dow Jones Trading with Deep Learning: The Unreasonable Effectiveness of Recurrent Neural Networks"

to be presented at http://insticc.org/node/TechnicalProgram/data/presentationDetails/69221

 

The paper is not publicly available, but you can ask the authors for copy. I have a copy and can send it to people interested, but I won't post it here publicly. PM me if you want a copy.

 

Couple comments on various things previously mentioned now that the paper is semi-public:

 

- The paper predicts daily close of DJIA from daily open value + opens of previous n days (2-10).

- The trading algorithm is simply buy if predicted close > open and sell otherwise. If you cannot buy (already have position), then hold. If you cannot sell (already hold cash), then hold cash.

- Authors use training data from 01/01/2000-06/30/2009 and test data from 07/01/2009 and 12/31/2017. This somewhat answers the critique that training is from bull market: it's not. Testing is not completely from bull market either.

- Authors use pretty much vanilla LSTM, so IMO the critique that "1000s of academics looking for signals and the winners publish a paper" or that they have tweaked/over-fitted the model until it worked does not seem to apply. (It's possible that they messed up somehow and used testing data in training, but they seem to be careful, so it doesn't seem very likely). This is really vanilla IMO without much tweaking at all. Which makes the results surprising BTW.

- I have some other comments, but I'd rather discuss this further with people who have read the paper, so I won't post them now.  8)

 

As I mentioned, I spent a bunch of time reimplementing what these guys presumably implemented.

I do not get their results. My results are pretty much at level of random guessing, i.e. the accuracy is around 48-52% while they get up to 80% accuracy.

It's quite possible I am not doing something the same way they did.

It's possible that their implementation or testing is messed up somehow too, but that's hard to prove. Maybe they'll open-source their implementation sometime in the future.  8)
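For calibration on what "random guessing" means here: accuracy is the directional hit rate (did the model call close-above-open correctly?), and a pure-noise predictor lands near 50%. A toy simulation, with made-up data:

```python
import random

def hit_rate(opens, closes, preds):
    """Fraction of days where the predicted direction (close vs. open) is right."""
    hits = sum((p > o) == (c > o) for o, c, p in zip(opens, closes, preds))
    return hits / len(opens)

random.seed(0)
opens  = [100 + random.gauss(0, 1) for _ in range(10_000)]
closes = [o + random.gauss(0, 1) for o in opens]
noise  = [o + random.gauss(0, 1) for o in opens]  # a "model" that knows nothing
print(round(hit_rate(opens, closes, noise), 3))   # ~0.5, coin-flip territory
```

So 48-52% is indistinguishable from flipping a coin, which is exactly why an 80% claim demands scrutiny.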

 

If anyone is interested in getting together (online, through some tools?) to go through the paper and/or my implementation, we can do it. PM me and we'll figure out what would work best.  8)

 

I don't know what the authors did, but I'll reiterate from before: vanilla LSTMs do little better than guessing on the stock market.  They probably had something like 1000 GPUs, tested thousands of hyperparameter configurations, and "overfit" the test set.  This is why papers like this are typically not believed anymore in the ML literature.  Try adding things like attention or skip connections, or whatever else is hot now (I'm not sure), and didn't someone recommend GRUs instead?  I have some other ideas you could use, like Gaussian processes to estimate real-time covariance matrices, but you're better off looking at the literature first than trying out harebrained ideas that might not work.  It's really not a trivial exercise to outperform the market with ML.

 

Ah, I think I see where there is a miscommunication between us. :)

 

My goal is not to outperform the market with ML. My goal is to understand whether what is proposed in this paper works, and if it doesn't, why.  8)

 

You may well be completely right that what the authors propose does not work.

I just want to understand how they got the results they got.

 

You've said "probably had like 1000 GPU and tested thousands of hyperparameter configurations and "overfit" the test set." before.

I don't think that's the case at all. If you read the paper - which you haven't so far - you'll see that their training is really simple and there are no "thousands of hyperparameter configurations". Which is baffling in itself. I have some suspicions about what could be wrong, but it's not productive to discuss them if you just dismiss the paper offhand. Which is, BTW, your prerogative - if that's where you stand, that's fine and I won't bother you with this further.  8)

You are entirely correct that I haven't read the paper, and maybe I was too hasty in dismissing it. I wouldn't mind a copy if you don't mind sending me one.

 

That being said, here is my reasoning in more depth. The authors seem to be in ML academia, so I made a couple of assumptions. 1) It didn't look like their paper made it into one of the premier conferences. Maybe that's because they aren't big names, but more likely it's because people have been training LSTMs on stocks for a long time, vanilla LSTMs don't work well, everyone in the ML community is rightly suspicious of 80% hit rates from a vanilla LSTM on indices, and the authors likely didn't do anything special to rule out that they just got "lucky" with their model. The reason they got "lucky" is 2) papers typically don't discuss the hyperparameter search they went through to find the exact configuration, so even if they didn't say they tested hundreds or thousands of hyperparameter settings, they might have, and likely did (though yes, I didn't read the paper). Unless they specifically say there were few or no hyperparameters to test, or that they tested only a few, you should assume they tested many. This is a dirty secret in ML: you come up with a new technique and you don't stop testing hyperparameter choices until you get good results on both the validation set and the test set. Then you submit to a journal saying the method did really well because it outperformed on both. But you stopped right after finding a hyperparameter choice that met those criteria, which strongly biases your results upward. This is related to p-hacking. It's a perfectly natural but bad thing people do, and it usually means most papers report performance that can't be matched when trying to reproduce them. You can pick basically any method of the thousands that have been proposed, and if it doesn't have over 1000 citations (even though the method seems useful), this is probably one reason why.
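The selection effect described above is easy to demonstrate with a toy simulation (nothing here is from the paper): score many coin-flip "models" on the same test set, keep only the best, and the winner looks far better than chance.

```python
import random

random.seed(1)
n_days, n_configs = 250, 500  # one test year, many "hyperparameter configurations"

# Ground-truth directions are a fair coin: no model can genuinely beat 50%.
truth = [random.random() < 0.5 for _ in range(n_days)]

def test_accuracy():
    """One configuration = one random guesser scored on the shared test set."""
    return sum((random.random() < 0.5) == t for t in truth) / n_days

best = max(test_accuracy() for _ in range(n_configs))
print(f"best of {n_configs} random configs: {best:.1%}")  # well above 50%
```

Reporting only `best` is exactly the stop-when-it-works bias: the headline number reflects the search, not any real signal.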

 

Now, maybe you are right and something else is going on, but if I had to guess, I'd say there's a good chance the authors just got "lucky". BTW, why don't you ask the authors for their code? It's customary to either hand this stuff out or post it on GitHub.

 

As a side note: even a vanilla LSTM has many hyperparameters: number of hidden states, activation type, number of variables to predict, train/validation/test breakdown, number of epochs, choice of k in k-fold validation, batch size, random seed, weight initialization scheme (Glorot, random normal, variance scaling, ...) for each layer of the network, the use of PCA or other methods to whiten the data, the momentum hyperparameter, learning-rate initialization, choice of optimizer...

 

My point is that even with a vanilla LSTM, the authors can pull more levers than you can hope to reproduce unless you know absolutely everything, maybe even down to the version of Python installed, to match the pseudorandom number generator. No doubt some of these choices are mentioned in the paper, but typically many of them won't be, which makes any reproduction difficult. And typically the authors are the only ones incentivized to keep trying hyperparameter configurations until one works.

 

The papers that are genuinely successful typically present methods where either a reproducible and externally valid hyperparameter configuration actually exists, or the method is relatively robust to hyperparameter choices.
