How to debug low-scoring submissions
My submissions keep scoring below 0.5 Sharpe. I've checked:
- No look-ahead bias (features use only past data)
- Predictions are normalized
- No NaN values
Any tips for diagnosing what goes wrong between local testing and platform scoring?
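For reference, my local Sharpe is computed roughly like this (function and variable names are illustrative; the platform's annualization factor and ddof conventions may differ, which alone could explain part of the gap):

```python
import numpy as np

def sharpe(returns, periods_per_year=252):
    # Annualized Sharpe with a zero risk-free rate. Platforms can differ
    # in the annualization factor, whether they demean, and the ddof used
    # for the standard deviation -- all worth checking against the docs.
    r = np.asarray(returns, dtype=float)
    return float(r.mean() / r.std(ddof=1) * np.sqrt(periods_per_year))
```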
7 Replies
I'd recommend reading "Quantitative Portfolio Management" by Michael Isichenko. It's the best practical guide I've found.
The data quality in this competition is actually quite good compared to real-world datasets. In practice, you'd spend 60%+ of your time cleaning data.
This is a common pitfall. Make sure each feature is computed from data strictly before the prediction date, not on it -- using same-day values is a subtle form of look-ahead bias that local backtests often miss.
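Concretely, in pandas this usually comes down to shifting the feature by one period. A minimal sketch, with illustrative column names and data:

```python
import pandas as pd

# Hypothetical daily price series; names and values are illustrative.
prices = pd.DataFrame(
    {"close": [100.0, 101.0, 99.5, 102.0, 103.5]},
    index=pd.date_range("2024-01-01", periods=5, freq="D"),
)

# Subtle look-ahead: this return uses the close ON the prediction date.
leaky_feature = prices["close"].pct_change()

# Safe: shift by one period so the feature for date t only sees data
# through t-1.
safe_feature = prices["close"].pct_change().shift(1)
```

A quick sanity check is that the safe version has one more leading NaN than the leaky one, and every value lags the leaky series by exactly one row.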
I disagree about the GARCH approach. In my experience, realized volatility estimators (like the Rogers-Satchell estimator) outperform parametric models.
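For anyone unfamiliar with it, here is a minimal sketch of the Rogers-Satchell estimator (OHLC column names and the 252-period annualization are assumptions, not anything from the competition spec):

```python
import numpy as np
import pandas as pd

def rogers_satchell_vol(ohlc: pd.DataFrame, periods_per_year: int = 252) -> float:
    """Annualized Rogers-Satchell volatility from OHLC bars.

    Per-bar variance: ln(H/C)*ln(H/O) + ln(L/C)*ln(L/O).
    Unlike close-to-close estimators, it remains unbiased under
    nonzero drift, which is part of why it can beat parametric models.
    """
    h, l, o, c = ohlc["high"], ohlc["low"], ohlc["open"], ohlc["close"]
    rs = np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o)
    return float(np.sqrt(rs.mean() * periods_per_year))
```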
Anyone else noticing that momentum factors have been working particularly well in the last month of competition data?
The documentation for the API is at /docs -- it's OpenAPI/Swagger format. Very helpful for understanding submission formats.