Q1 2026 Momentum Challenge -- Discussion Thread
Official discussion thread for the Q1 2026 Momentum Challenge competition.
Dataset covers US large-cap equities, daily frequency. Remember: predictions are evaluated on next-day returns.
Feel free to discuss approaches here but please don't share exact code from your submissions.
46 Replies
Here's a simple max-Sharpe optimizer for reference (long-only, capped at 10% per name, so it needs at least 10 assets to be feasible):

```python
import numpy as np
from scipy import optimize

def max_sharpe_portfolio(returns, rf=0.0):
    """Weights that maximize the Sharpe ratio of a (T, n) return matrix."""
    n = returns.shape[1]
    init_w = np.ones(n) / n
    bounds = [(0.0, 0.1)] * n  # long-only, max 10% per asset
    constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}
    result = optimize.minimize(
        # Minimize the negative Sharpe ratio of the portfolio returns.
        lambda w: -(np.mean(returns @ w) - rf) / np.std(returns @ w),
        init_w, bounds=bounds, constraints=constraints,
    )
    return result.x
```
Can confirm this approach works. I implemented something similar and jumped from rank 150 to rank 23 in two weeks.
Has anyone tried using attention mechanisms for this? The temporal attention weights could tell you which historical periods are most relevant.
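Haven't tried a full model, but the core idea is cheap to prototype: score each historical period against the current feature vector and softmax the scores into attention weights. A minimal sketch (the scaled dot-product scoring and the synthetic data are my assumptions, not anything from the competition):

```python
import numpy as np

def temporal_attention(history, query, temperature=1.0):
    """Weight historical periods by similarity to the current state.

    history: (T, d) array of past feature vectors
    query:   (d,) current feature vector
    Returns (T,) attention weights that sum to 1.
    """
    scores = history @ query / np.sqrt(history.shape[1])  # scaled dot-product
    scores = scores / temperature
    scores -= scores.max()  # subtract max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

rng = np.random.default_rng(0)
history = rng.normal(size=(250, 8))  # ~1 trading year of daily features
query = history[-1]                  # treat the latest day as the query
w = temporal_attention(history, query)
```

The weights can then be used to form an attention-weighted average of past returns or features; lowering `temperature` concentrates the weight on the closest-matching periods.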
For those new to the platform: start with the tutorial competition. It has a smaller dataset and more forgiving scoring.
Interesting thread! I've been exploring reinforcement learning for portfolio allocation. The challenge is defining the right reward function.
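For what it's worth, a common starting point (my assumption, not the competition's scoring) is one-period log growth net of a transaction-cost penalty, which discourages the agent from churning the portfolio:

```python
import numpy as np

def step_reward(weights, prev_weights, asset_returns, cost_bps=10.0):
    """One-period RL reward: log portfolio growth minus linear trading costs.

    weights:       (n,) allocation chosen for this period
    prev_weights:  (n,) allocation carried over from last period
    asset_returns: (n,) realized simple returns for the period
    cost_bps:      cost in basis points per unit of turnover (assumed figure)
    """
    turnover = np.abs(weights - prev_weights).sum()
    cost = turnover * cost_bps / 1e4
    gross = 1.0 + weights @ asset_returns
    return np.log(gross) - cost

# Holding a 50/50 book through offsetting moves: zero reward, zero cost.
r = step_reward(np.array([0.5, 0.5]), np.array([0.5, 0.5]),
                np.array([0.01, -0.01]))
```

Using log returns makes the cumulative reward equal to log terminal wealth, which is one defensible answer to the "right reward function" question.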
One more thing: the scoring engine uses a held-out test period that you never see. So treat your validation score as an estimate, not a guarantee -- the final ranking comes from the hidden period.
This is a common pitfall. Make sure your features are computed before the prediction date, not on it. That's subtle look-ahead bias.
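A minimal pandas illustration of the fix (the series here is made-up data): lag the feature so the value used on day t was already computable at the close of day t-1.

```python
import pandas as pd

prices = pd.Series([100.0, 101.0, 99.0, 102.0],
                   index=pd.date_range("2026-01-05", periods=4))

# WRONG: this feature uses the same-day return to predict that same day.
same_day_ret = prices.pct_change()

# RIGHT: shift by one day so the feature is known before the prediction date.
lagged_ret = prices.pct_change().shift(1)
```

The backtest difference can be dramatic: the unshifted version is effectively trading on information from the future.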
One thing to watch out for: survivorship bias in the training data. Make sure you include delisted securities.
Good point about overfitting. My rule of thumb: never trust a backtest with fewer than 500 observations in the out-of-sample period.
The competition scoring docs could definitely be clearer. I spent 2 hours debugging what turned out to be a normalization issue.
Anyone else noticing that momentum factors have been working particularly well in the last month of competition data?
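Easy enough to check with a simple trailing-return momentum signal (the lookback lengths below are arbitrary choices, not anything from the competition docs). Skipping the most recent day is a common way to sidestep short-term reversal:

```python
import numpy as np

def momentum_signal(prices, lookback=20, skip=1):
    """Trailing return from t-lookback to t-skip for each asset.

    prices: (T, n) array of prices. Returns an (n,) vector of momentum scores.
    """
    return prices[-1 - skip] / prices[-1 - lookback] - 1.0

# Two synthetic assets: one grinds up 1%/day, one drifts down 0.5%/day.
prices = np.cumprod(1 + np.full((30, 2), [0.01, -0.005]), axis=0) * 100
sig = momentum_signal(prices)
```

Cross-sectionally ranking these scores each day gives a basic long-short momentum portfolio to compare month over month.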
The data quality in this competition is actually quite good compared to real-world datasets. In practice, you'd spend 60%+ of your time cleaning data.
For factor models, I'd strongly recommend the Fama-French 5-factor model as a starting point. It explains a large share of the systematic variation in equity returns.
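Estimating the loadings is just OLS of excess returns on the five factors. A sketch with synthetic stand-ins (real factor returns come from Ken French's data library; the betas and noise level here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500
# Synthetic stand-ins for the five factors: Mkt-RF, SMB, HML, RMW, CMA.
factors = rng.normal(0.0, 0.01, size=(T, 5))
true_betas = np.array([1.1, 0.3, -0.2, 0.1, 0.0])
excess_ret = factors @ true_betas + rng.normal(0.0, 0.002, size=T)

# OLS with an intercept (alpha) via least squares.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha, betas = coef[0], coef[1:]
```

A persistent nonzero alpha after controlling for all five factors is what you'd actually be hunting for in this competition.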
Thanks for sharing! This is exactly the kind of insight that helps the community grow. Bookmarking this thread.
I've found that sector neutrality is a key factor in the scoring. Strategies that are long one sector and short another tend to underperform.
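One straightforward way to enforce it (the sector labels below are hypothetical): demean the raw signal within each sector so every sector nets to zero before sizing positions.

```python
import numpy as np

def sector_neutralize(signal, sectors):
    """Subtract the sector mean from each name's signal.

    signal:  (n,) raw scores
    sectors: (n,) sector labels
    Returns scores that sum to zero within every sector.
    """
    out = signal.astype(float).copy()
    for s in np.unique(sectors):
        mask = sectors == s
        out[mask] -= out[mask].mean()
    return out

signal = np.array([0.9, 0.5, 0.1, -0.3])
sectors = np.array(["tech", "tech", "energy", "energy"])
neutral = sector_neutralize(signal, sectors)
```

After neutralization the portfolio is long and short within each sector rather than betting one sector against another.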
I'd recommend reading "Quantitative Portfolio Management" by Michael Isichenko. It's the best practical guide I've found.