Help: RuntimeError in submission container
Getting this error in my submission logs:
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB
Is GPU available in the submission environment? The docs aren't clear. If not, how should I handle model inference that was trained on GPU?
Edit: Solved! Need to use model.to('cpu') before inference in the submission script.
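For anyone hitting the same thing: besides model.to('cpu'), a GPU-trained checkpoint usually also needs map_location='cpu' at load time, or torch.load will try to deserialize CUDA tensors onto a device that isn't there. A minimal sketch, assuming a PyTorch model (the Linear layer and in-memory buffer below are hypothetical stand-ins for the real model and checkpoint file):

```python
import io
import torch

# Stand-in for the trained model and its saved checkpoint.
model = torch.nn.Linear(4, 1)
buf = io.BytesIO()
torch.save(model.state_dict(), buf)   # pretend this is the GPU checkpoint
buf.seek(0)

# map_location='cpu' remaps any CUDA tensors in the checkpoint to CPU,
# so loading works even when torch.cuda.is_available() is False.
state = torch.load(buf, map_location='cpu')
model.load_state_dict(state)
model.to('cpu')
model.eval()

with torch.no_grad():                 # inference only: skip autograd bookkeeping
    out = model(torch.zeros(1, 4))
print(tuple(out.shape))               # (1, 1)
```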
16 Replies
Great discussion! This is why I love this community - knowledge sharing makes everyone better.
I've found that sector neutrality is a key factor in the scoring. Strategies that are long one sector and short another tend to underperform.
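One common way to enforce sector neutrality is to demean weights within each sector, so each sector nets to zero exposure. A sketch under that assumption (the sector labels and helper name are mine, not from any competition API):

```python
import numpy as np

def sector_neutralize(weights, sectors):
    """Subtract each sector's mean weight so net exposure per sector is zero."""
    w = np.asarray(weights, dtype=float).copy()
    sectors = np.asarray(sectors)
    for s in np.unique(sectors):
        mask = sectors == s
        w[mask] -= w[mask].mean()
    return w

sectors = np.array(['tech', 'tech', 'fin', 'fin'])
w = sector_neutralize([0.4, 0.2, 0.3, 0.1], sectors)
print(w.round(2))  # each sector now sums to zero
```

Note this produces a dollar-neutral tilt within each sector, so you'd typically renormalize to your gross exposure target afterwards.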
Here's a simple max-Sharpe optimizer for reference.

import numpy as np
from scipy import optimize

def max_sharpe_portfolio(returns, rf=0.0):
    """Maximize the Sharpe ratio of a long-only, fully invested portfolio.

    returns: (T, n) array of per-period asset returns.
    Note: with a 10% cap per asset, the sum-to-one constraint is only
    feasible when n >= 10.
    """
    n = returns.shape[1]
    init_w = np.ones(n) / n                      # start from equal weights
    bounds = [(0.0, 0.1)] * n                    # long-only, max 10% per asset
    constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}  # fully invested
    result = optimize.minimize(
        # Negate the Sharpe ratio because minimize() minimizes.
        lambda w: -(np.mean(returns @ w) - rf) / np.std(returns @ w),
        init_w, bounds=bounds, constraints=constraints,
    )
    return result.x
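To sanity-check the optimizer, here's a quick run on synthetic returns (random data, not market data; the function is repeated so the snippet runs standalone):

```python
import numpy as np
from scipy import optimize

def max_sharpe_portfolio(returns, rf=0.0):
    n = returns.shape[1]
    init_w = np.ones(n) / n
    bounds = [(0.0, 0.1)] * n
    constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}
    result = optimize.minimize(
        lambda w: -(np.mean(returns @ w) - rf) / np.std(returns @ w),
        init_w, bounds=bounds, constraints=constraints)
    return result.x

rng = np.random.default_rng(0)
rets = rng.normal(0.001, 0.02, size=(250, 20))  # 250 days, 20 assets
w = max_sharpe_portfolio(rets)
# Weights should sum to 1 and respect the [0, 0.1] per-asset bounds.
print(round(w.sum(), 6), w.min() >= -1e-8, w.max() <= 0.1 + 1e-8)
```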
One thing to watch out for: survivorship bias in the training data. Make sure you include delisted securities.
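The practical fix is to build the universe point-in-time: at each date, include every security that was listed then, rather than only those still alive today. A sketch with a hypothetical listings table (the column names and tickers are made up for illustration):

```python
import pandas as pd

# Hypothetical listings table; in practice this comes from your data vendor.
listings = pd.DataFrame({
    'ticker': ['AAA', 'BBB', 'CCC'],
    'list_date': pd.to_datetime(['2010-01-01', '2012-06-01', '2015-03-01']),
    'delist_date': pd.to_datetime(['2018-05-01', pd.NaT, pd.NaT]),
})

def universe_at(date):
    """Securities listed as of `date`, including ones delisted later."""
    date = pd.Timestamp(date)
    active = (listings['list_date'] <= date) & (
        listings['delist_date'].isna() | (listings['delist_date'] > date))
    return listings.loc[active, 'ticker'].tolist()

print(universe_at('2016-01-01'))  # ['AAA', 'BBB', 'CCC']
print(universe_at('2019-01-01'))  # ['BBB', 'CCC'] -- AAA delisted, still in history
```

Filtering on today's membership instead silently drops every blown-up name from the backtest, which inflates historical returns.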