
Help: RuntimeError in submission container

SharpeShooter
Jan 25, 2026
2,223 views
17 posts
help
technical
submission

Getting this error in my submission logs:

RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB

Is GPU available in the submission environment? The docs aren't clear. If not, how should I handle model inference that was trained on GPU?

Edit: Solved! The submission environment is CPU-only; call model.to('cpu') before inference in the submission script.
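For anyone who lands here later, a minimal sketch of the fix. The model class and checkpoint path are placeholders (a stand-in torch.nn.Linear here), not the actual competition model:

```python
import torch

# Stand-in for your real model architecture.
model = torch.nn.Linear(4, 1)

# If the checkpoint was saved on GPU, load it onto CPU, e.g.:
#   state = torch.load('model.pt', map_location='cpu')  # 'model.pt' is a placeholder
#   model.load_state_dict(state)

model.to('cpu')
model.eval()

x = torch.randn(8, 4)  # dummy batch: 8 samples, 4 features
with torch.no_grad():
    preds = model(x)

print(preds.shape)  # torch.Size([8, 1])
```

The map_location='cpu' argument to torch.load is the key when the checkpoint tensors were saved from a CUDA device.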

16 Replies

36
DataSciPro · Feb 11, 2026

Great discussion! This is why I love this community - knowledge sharing makes everyone better.

7
BugHunter · 996d ago

I've found that sector neutrality is a key factor in the scoring. Strategies that are long one sector and short another tend to underperform.
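To illustrate, a toy sketch of sector-neutralizing a signal by demeaning it within each sector, so the net exposure per sector is zero (the signal values and sector labels are made up):

```python
import numpy as np

# Made-up signal and sector labels for six assets.
signal = np.array([0.5, -0.2, 0.8, 0.1, -0.6, 0.3])
sectors = np.array(['tech', 'tech', 'fin', 'fin', 'energy', 'energy'])

# Demean within each sector: net weight per sector becomes zero.
neutral = signal.astype(float).copy()
for s in np.unique(sectors):
    mask = sectors == s
    neutral[mask] -= neutral[mask].mean()

for s in np.unique(sectors):
    print(s, round(abs(neutral[sectors == s].sum()), 12))  # 0.0 for every sector
```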

39
FactorZoo · Feb 24, 2026
import numpy as np
from scipy import optimize

def max_sharpe_portfolio(returns, rf=0.0):
    n = returns.shape[1]
    init_w = np.ones(n) / n
    bounds = [(0.0, 0.1)] * n  # cap each weight at 10%
    constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}
    result = optimize.minimize(
        lambda w: -(np.mean(returns @ w) - rf) / np.std(returns @ w),
        init_w,
        bounds=bounds,
        constraints=constraints,
    )
    return result.x

Here's a simple max-Sharpe optimizer for reference.
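A quick sanity check of that optimizer on simulated returns (the function is restated so the snippet runs standalone; the dimensions and seed are arbitrary). Note it needs at least 10 assets so the 10% per-asset cap can satisfy the sum-to-one constraint:

```python
import numpy as np
from scipy import optimize

def max_sharpe_portfolio(returns, rf=0.0):
    n = returns.shape[1]
    init_w = np.ones(n) / n
    bounds = [(0.0, 0.1)] * n  # cap each weight at 10%
    constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}
    result = optimize.minimize(
        lambda w: -(np.mean(returns @ w) - rf) / np.std(returns @ w),
        init_w, bounds=bounds, constraints=constraints,
    )
    return result.x

# Simulated daily returns: 252 days x 20 assets.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(252, 20))

w = max_sharpe_portfolio(returns)
print(round(w.sum(), 6))       # weights sum to 1 (fully invested)
print(bool(w.max() <= 0.1 + 1e-8))  # respects the per-asset cap
```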

11
VolTrader · Feb 23, 2026

One thing to watch out for: survivorship bias in the training data. Make sure you include delisted securities.
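A toy sketch of the point-in-time universe construction this implies: keep delisted names in the training universe as of each date, rather than filtering on today's listings. The tickers and column names here (ticker, list_date, delist_date) are made up, not from any specific dataset:

```python
import pandas as pd

# Hypothetical listings table; in 2018 'AAA' was delisted.
listings = pd.DataFrame({
    'ticker': ['AAA', 'BBB', 'CCC'],
    'list_date': pd.to_datetime(['2010-01-04', '2012-06-01', '2015-03-02']),
    'delist_date': pd.to_datetime(['2018-05-30', None, None]),
})

# Universe as of a training date: listed by then, and not yet delisted.
as_of = pd.Timestamp('2017-01-03')
active = listings[
    (listings['list_date'] <= as_of)
    & (listings['delist_date'].isna() | (listings['delist_date'] > as_of))
]

# 'AAA' is included even though it was later delisted:
print(sorted(active['ticker']))  # ['AAA', 'BBB', 'CCC']
```

Filtering on today's survivors instead would silently drop 'AAA' from every historical date, biasing the training data upward.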
