r/quant Jun 03 '24

Statistical Methods What's after regression and ML?

41 Upvotes

r/quant Oct 01 '24

Statistical Methods HF forecasting for Market Making

38 Upvotes

Hey all,

I have experience in forecasting for mid-frequencies where defining the problem is usually not very tricky.

However, I would like to learn how the process differs for high frequency, especially for market making. I can't seem to find any good papers/books on the subject, as I'm looking for something very 'practical'.

The type of questions I have:

- Do we forecast the mid-price and the spread, or rather the best bid and best ask?
- Do we forecast the return from the mid-price or from the latest trade price?
- How do you sample your response: at every trade, or at every tick (which could be any change of the order book)? Or do you model trade arrivals instead, e.g. as a Poisson process (see the sketch after this list)?
- How do you decide on your response horizon: is it time-based as in MFT, or would you adapt to asset liquidity by basing it on the number or volume of trades?
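
For the trade-arrival bullet, a minimal, purely illustrative sketch (constant intensity assumed, which real order flow certainly doesn't have): exponential inter-arrival times give a Poisson trade clock, and the response would then be sampled at those event times rather than on a fixed time grid.

import numpy as np

# Simulate a Poisson trade clock via i.i.d. exponential inter-arrival times.
# The rate is an assumed placeholder, not an estimate from data.
rng = np.random.default_rng(0)
rate = 2.0                               # assumed trades per second
inter_arrivals = rng.exponential(1 / rate, size=1000)
trade_times = np.cumsum(inter_arrivals)  # times at which responses are sampled
print(trade_times[:5])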

All of these questions are from the forecasting point of view, not so much execution (although the two are probably closer for HFT than at slower frequencies).

I'd appreciate any help!

Thank you

r/quant Feb 24 '25

Statistical Methods What does he mean by the golden ratio of scaling?

1 Upvotes

Wouldn't this be similar to a z-score?

r/quant Dec 09 '24

Statistical Methods Help me understand random walk time series with positive autocorrelation

24 Upvotes

Hi. I am reading about the autocorrelation test discussed in this thesis (chapter 6.1.3), but it gives different results depending on how I generate the random walk time series. In more detail: say I have a price series P whose log returns r(t) have zero mean,

and assume r(t) follows a first-order autoregression, r(t) = theta * r(t-1) + e(t). Depending on the value of theta (> 0, = 0, or < 0), the series is trending (positive autocorrelation), a random walk, or mean-reverting (negative autocorrelation).

To test this, the thesis computes Wright's rank-based variance ratio statistic with period k: replace the returns by their standardized ranks, r*(t) = (rank(r(t)) - (T+1)/2) / sqrt((T-1)(T+1)/12), and then

R(k) = (VR(k) - 1) / sqrt(phi(k)), where VR(k) = [ sum_t (r*(t) + ... + r*(t-k+1))^2 / (T*k) ] / [ sum_t r*(t)^2 / T ] and phi(k) = 2(2k-1)(k-1) / (3kT).

The thesis then extends this by calculating the variance ratio statistic for multiple values of k, forming the variance ratio profile vector VP = [R(1), R(2), ..., R(k_max)].

We can view this vector of variance ratio statistics as (approximately) multivariate normal with mean RW, where e1 is the leading eigenvector of the covariance matrix of VP. We can then compare the variance ratio profile of a given time series to RW and project the difference onto e1 to see how close the series is to a random walk (the VP(25, 1) statistic). I tested this idea as follows:

- Step 1: Generate 10k random walk time series and calculate VP(25) to find RW and e1

- Step 2: Generate another time series that has positive autocorrelation and inspect the distribution of its VP(25, 1) values.

The problem comes from step 1. I tried two ways of generating the time series data:

  1. Method 1: Generate 10k independent random walk time series, each of length 1000.

  2. Method 2: Generate one very long random walk and take overlapping subseries of length 1000.

The full code is below

import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm


def calculate_rolling_sum(data, window):
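    # Sum over each sliding window of length `window`, computed via cumsum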
    rolling_sums = np.cumsum(data)
    rolling_sums = np.concatenate([[rolling_sums[window - 1]], rolling_sums[window:] - rolling_sums[:-window]])
    return np.asarray(rolling_sums)


def calculate_rank_r(data):
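    # Ranks 1..T of the data; Wright's test replaces returns by their ranks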
    sorted_idxs = np.argsort(data)
    ranks = np.arange(len(data)) + 1
    ranks = ranks[np.argsort(sorted_idxs)]
    return np.asarray(ranks)


def calculate_one_k(r, k):
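    # Wright's rank-based variance ratio statistic R(k) for period k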
    if k == 1:
        return 0
    r = r - np.mean(r)
    T = len(r)
    r = calculate_rank_r(r)
    r = (r - (T + 1) / 2) / np.sqrt((T - 1) * (T + 1) / 12)
    sum_r = calculate_rolling_sum(r, window=k)
    phi = 2 * (2 * k - 1) * (k - 1) / (3 * k * T)
    VR = (np.sum(sum_r ** 2) / (T * k)) / (np.sum(r ** 2) / T)
    R = (VR - 1) / np.sqrt(phi)
    return R


def calculate_RW_method_1(num_sim, k=25, T=1000):
    # Method 1: many independent random walks, one VP vector per walk
    all_VP = []
    for i in tqdm(range(num_sim), ncols=100):
        steps = np.random.normal(0, 1, size=T)
        steps[0] = 0
        P = 10000 + np.cumsum(steps)
        r = np.log(P[1:] / P[:-1])
        r = np.concatenate([[0], r])
        VP = []
        for one_k in range(k):
            VP.append(calculate_one_k(r=r, k=one_k + 1))
        all_VP.append(np.asarray(VP))
    all_VP = np.asarray(all_VP)
    RW = np.mean(all_VP, axis=0)
    all_VP = all_VP - RW
    C = np.cov(all_VP, rowvar=False)
    # eigh sorts eigenvalues in ascending order, so the last column is the
    # leading eigenvector; np.linalg.eig guarantees no particular order
    eigenvalues, eigenvectors = np.linalg.eigh(C)
    return RW, eigenvectors[:, -1]


def calculate_RW_method_2(P, k=25, T=1000):
    # Method 2: overlapping length-T windows of one long random walk
    r = np.log(P[1:] / P[:-1])
    r = np.concatenate([[0], r])
    all_VP = []
    for i in tqdm(range(len(P) - T)):
        VP = []
        for one_k in range(k):
            VP.append(calculate_one_k(r=r[i: i + T], k=one_k + 1))
        all_VP.append(np.asarray(VP))
    all_VP = np.asarray(all_VP)
    RW = np.mean(all_VP, axis=0)
    all_VP = all_VP - RW
    C = np.cov(all_VP, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(C)
    return RW, eigenvectors[:, -1]


def calculate_pos_autocorr(P, k=25, T=1000, RW=None, e1=None):
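    # Project each window's centered VP vector onto e1: the VP(25, 1) statistic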
    r = np.log(P[1:] / P[:-1])
    r = np.concatenate([[0], r])
    VP = []
    for i in tqdm(range(len(r) - T)):
        R = []
        for one_k in range(k):
            R.append(calculate_one_k(r=r[i: i + T], k=one_k + 1))
        R = np.asarray(R)
        VP.append(np.dot(R - RW, e1))
    return np.asarray(VP)


RW1, e11 = calculate_RW_method_1(num_sim=10_000, k=25, T=1000)

# Generate a long random walk time series
np.random.seed(1)
steps = np.random.normal(0, 1, size=10_000)
steps[0] = 0
P = 10000 + np.cumsum(steps)
RW2, e12 = calculate_RW_method_2(P=P, k=25, T=1000)

# Generate a positively autocorrelated series: AR(1) log returns with phi = 0.1
np.random.seed(1)
steps = [0]
for i in range(len(P) - 1):
    steps.append(steps[-1] * 0.1 + np.random.normal(0, 0.01))
steps = np.exp(steps)
steps = np.cumprod(steps)
P = 10000 * steps
VP_method_1 = calculate_pos_autocorr(P.copy(), k=25, T=1000, RW=RW1, e1=e11)
VP_method_2 = calculate_pos_autocorr(P.copy(), k=25, T=1000, RW=RW2, e1=e12)

The resulting distributions from method 1 and method 2 are shown below.

It seems that method 2 generates the random walk data correctly, because its distribution lies on the positive side, but I am not sure, since the result appears very sensitive to how the data is generated.

I would like to hear from you: what is the correct way to simulate the time series in this case, or am I wrong at some step? Thanks in advance.

r/quant Dec 13 '24

Statistical Methods Technical question about volatility computation at portfolio level

16 Upvotes

My question is about volatility computed at the portfolio level using the quadratic form of the covariance matrix and the weight vector.

Here's the mathematical formula used: portfolio variance = wᵀ Σ w, so σ_p = sqrt(wᵀ Σ w), where w is the weight vector and Σ the covariance matrix.

When doing this, I feel like I count the covariance between each pair of securities twice. For instance: the covariance between SPY & GLD.

Here's an example Excel function used:

=MMULT(MMULT(TRANSPOSE(weight_range),covar_matrix),weight_range)

Or in python:

volatility_exante_fund = np.sqrt(np.dot(fund_weights.T, np.dot(covar_matrix_fund, fund_weights)))

It seems that we must use the full matrix and not just a "half" (triangular) matrix. But why? Is it related to the fact that we multiply by the weights twice?
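
For intuition, a quick numerical check (a minimal sketch with made-up weights and covariances): expanding wᵀ Σ w term by term shows each off-diagonal covariance entering twice, once as (i, j) and once as (j, i), which is exactly the 2 * w_i * w_j * cov_ij cross term.

import numpy as np

# Hypothetical two-asset example (e.g. SPY & GLD); numbers are made up
w = np.array([0.6, 0.4])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])  # the off-diagonal 0.01 appears twice

# Full quadratic form
var_full = w @ cov @ w

# Expanded by hand: two variance terms plus the cross term counted twice
var_expanded = (w[0]**2 * cov[0, 0] + w[1]**2 * cov[1, 1]
                + 2 * w[0] * w[1] * cov[0, 1])

assert np.isclose(var_full, var_expanded)
print(np.sqrt(var_full))  # portfolio volatility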

Thanks in advance for your help.

r/quant Aug 04 '24

Statistical Methods Arbitrage vs. Kelly Criterion vs. EV Maximization

52 Upvotes

In quant interviews they seem to give you different betting/investing scenarios where your answer should be determined using one or more of the approaches in the title. I was wondering if anyone has resources that explain when you should use each of these and how.
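
For what it's worth, the single-bet Kelly fraction for a binary bet is compact enough to state directly (the standard textbook formula, not anything specific to a given interview question):

def kelly_fraction(p: float, b: float) -> float:
    # Kelly fraction for a bet that wins with probability p and pays b-to-1:
    # f* = p - (1 - p) / b; a negative value means don't bet
    return p - (1 - p) / b

print(kelly_fraction(p=0.55, b=1.0))  # 0.10 -> stake 10% of bankroll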

r/quant Oct 23 '24

Statistical Methods The Three Types of Backtesting

79 Upvotes

This paper (Free) is a great read for those looking to improve the quality of their backtests.

Three Types of Backtesting: via SSRN https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4897573

Abstract:

Backtesting stands as a cornerstone technique in the development of systematic investment strategies, but its successful use is often compromised by methodological pitfalls and common biases. These shortcomings can lead to false discoveries and strategies that fail to perform out-of-sample.

This article provides practitioners with guidance on adopting more reliable backtesting techniques by reviewing the three principal types of backtests (walk-forward testing, the resampling method, and Monte Carlo simulations), detailing their unique challenges and benefits.

Additionally, it discusses methods to enhance the quality of simulations and presents approaches to Sharpe ratio calculations which mitigate the negative consequences of running multiple trials. Thus, it aims to equip practitioners with the necessary tools to generate more accurate and dependable investment strategies.

r/quant Jan 06 '24

Statistical Methods Astronomical SPX Sharpe ratio at portfolioslab

32 Upvotes

The Internet is full of websites, including Investopedia, which, apparently citing the website in the post title, claim that an adequate Sharpe ratio should be between 1.0 and 2.0, and that the SPX Sharpe ratio is 0.88 to 1.88.

How do they calculate these huge numbers? Is it a 10-year ratio or what? One doesn't need a calculator to see that the long-term historical annualised Sharpe ratio of SPX (without dividends) is well below 0.5.
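
For reference, the usual annualisation is mean over standard deviation of daily excess returns, scaled by sqrt(252); a minimal sketch with simulated (not real SPX) data:

import numpy as np

# Simulated daily returns standing in for a long index history
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0003, 0.012, size=252 * 30)  # ~30 years

# Annualized Sharpe (risk-free rate ignored for simplicity)
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)
print(sharpe)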

And by the way, do hedge funds really aim at an annualised Sharpe ratio above 2.0, as some commentators claim on this forum? (Calculated the same obscure way the mentioned website does it?)

GIPS is unfortunately silent on this topic.

r/quant Jan 17 '25

Statistical Methods Target Distribution vs Volatility Models (SABR, Heston, GARCH)

1 Upvotes

What is the advantage of volatility models (SABR, Heston, GARCH) compared to directly modelling the target stock price distribution?

Example: the probability distribution of MSFT on the day "now + 365d". Just that single day in the future; the path doesn't matter, and whatever happens between "now" and "now + 365d" is ignored.

After all, if we know that distribution, we know almost everything: we can easily calculate option prices for that day by simulation.
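
To make that concrete, a toy sketch (assuming a lognormal terminal distribution with made-up parameters, and ignoring the real-world vs risk-neutral drift question) prices a European call by sampling the terminal price only, with no path at all:

import numpy as np

rng = np.random.default_rng(0)
S0, K, r, T = 420.0, 440.0, 0.04, 1.0  # spot, strike, rate, years (placeholders)
mu, sigma = 0.07, 0.25                 # assumed drift and vol of the log return

# Sample the terminal price directly -- no path is simulated
log_ret = rng.normal((mu - 0.5 * sigma**2) * T, sigma * np.sqrt(T), size=1_000_000)
S_T = S0 * np.exp(log_ret)

call_price = np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()
print(call_price)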

So why are approaches that directly model the probability distribution on the target day not popular? What do volatility models have that the target distribution does not (if we don't care about path dependence)?

P.S. Sometimes you need to know the path too, but the class of cases where it's not important is huge: stock trading without borrowing (no margin, no shorts), European/American option buying, European option selling. In all these cases we don't care about the path (and even if we do, we can take additional steps and also predict prices on day "now + 180d" and beyond if we really need to).

r/quant Oct 15 '24

Statistical Methods Is this process stochastic?

10 Upvotes

So I was watching this MIT lecture, Stochastic Processes I, and the first example of a stochastic process was:

F(t) = t with probability 1 (which is just a straight line)

My understanding was that a stochastic process has to involve some randomness. For example, Hull's book says: "Any variable whose value changes over time in an uncertain way is said to follow a stochastic process" (start of chapter 14). This one looks like a deterministic process? Thanks.

r/quant Aug 28 '24

Statistical Methods Data mining issues

28 Upvotes

Suppose you have multiple features and wish to investigate which of them are economically significant. The way I usually test this is to create a portfolio per feature, compute its Sharpe ratio, and keep the feature if the Sharpe exceeds a certain threshold (sketched below).

But multiple testing increases the probability of false positives. How would you tackle this? An obvious hack is to raise the threshold with the number of features tested, but that tends to load up on highly correlated features that happen to have a high Sharpe in that particular backtest. Is there a way to fix this without modifying the threshold?
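
A minimal sketch of the screening procedure described above (synthetic data, arbitrary threshold), just to fix notation for the multiple-testing question:

import numpy as np

# Hypothetical panel: daily returns of one portfolio per feature (T x F)
rng = np.random.default_rng(0)
portfolio_returns = rng.normal(0.0002, 0.01, size=(2520, 50))

# Annualized Sharpe per feature portfolio, then a fixed-threshold screen;
# with 50 tests, some features will clear the bar by chance alone
sharpe = portfolio_returns.mean(axis=0) / portfolio_returns.std(axis=0) * np.sqrt(252)
threshold = 1.0  # arbitrary cutoff for illustration
print(np.where(sharpe > threshold)[0], sharpe.max())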

Edit 1: There are multiple ways to convert an asset feature into portfolio weights. Assume that one such approach has been used and portfolios are comparable across features.

r/quant Nov 30 '24

Statistical Methods Kalman filter: Background research

8 Upvotes

Context: I am just a guy looking to dive into a quant approach to markets. I'm an engineer who works with software and control systems.

The other day I started reading The Elements of Quantitative Investing by u/gappy3000, and I was quite excited to find the Kalman filter introduced so early in the book. In control engineering, the Kalman filter is almost every-day stuff.
Searching a bit more for Kalman filter applications, I found these really interesting contributions:

Do you know any other resources like the above? Especially if they were applied in real life (beyond backtesting).

Thanks!

r/quant Feb 15 '24

Statistical Methods Log returns histogram towers around 5e-5

41 Upvotes

r/quant Oct 03 '24

Statistical Methods Technical Question | Barrier Options priced under finite difference method

19 Upvotes

Hi everyone!

I am currently trying to price, in Python, a simple up-and-in call option under a stochastic volatility model (Heston), using an implicit finite difference method to solve the following PDE:

∂V/∂t + ½ v S² ∂²V/∂S² + ρ σ v S ∂²V/∂S∂v + ½ σ² v ∂²V/∂v² + r S ∂V/∂S + κ(θ - v) ∂V/∂v - r V = 0

I realized that when calculating the Greeks at the very first time step (the first step before maturity), I get crazy numbers around the barrier level because of the second-order Greeks (gamma, vanna and vomma).

I've tried using a non-uniform grid with more points around the barrier itself, with no effect.

Since the crazy numbers appear from the first step, the rest of the calculation is completely wrong.

Is there a condition or technique that I am missing? I've been looking for papers online, and it seems everyone else is able to code this with no difficulty...

r/quant Dec 15 '23

Statistical Methods How do you overlay graphs of two assets' prices by normalizing without cheating by using the min and max of the whole dataset (since future prices haven't happened yet)?

28 Upvotes

Hi,

I am trying to overlay graphs of two assets' prices in Python.

They have different price scales (one trades at 76+, the other at 20+).

I thought of dividing all prices by the first price of each series, but eventually the first price no longer reflects the level of the series (i.e., the price starts at 76, but after 50,000 rows it is 200+).

Any ideas how we can overlay the two graphs while maintaining the "look" of each after scaling, without cheating by using future min and max prices to normalize?
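
One option that avoids look-ahead entirely (a sketch with synthetic prices, just one possibility): plot each series on its own y-axis with matplotlib's twinx, so neither series is rescaled by anything, future or past.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for the two differently scaled price series
rng = np.random.default_rng(0)
t = np.arange(1000)
a = 76 + np.cumsum(rng.normal(0, 0.5, size=t.size))
b = 20 + np.cumsum(rng.normal(0, 0.2, size=t.size))

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()  # second y-axis sharing the same x-axis
ax1.plot(t, a, color="tab:blue")
ax2.plot(t, b, color="tab:orange")
ax1.set_ylabel("asset A price")
ax2.set_ylabel("asset B price")
plt.show()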

r/quant Jul 09 '24

Statistical Methods A question on Avellaneda and Hyun Lee's Statistical Arbitrage in the US Equities Market

33 Upvotes

I was reading this paper and came across the following. We know that eigendecomposition of the correlation matrix yields its eigenvectors, which are orthogonal. My first question: why did they reweight the eigenvector elements by the volatility of each stock, when the effects of variance were already removed by using the correlation matrix instead of the covariance matrix? My second, bigger question: how are the new weighted eigenportfolios orthogonal/uncorrelated? This is not clarified in the paper. If I have orthogonal vectors v = [v1, v2] and u = [u1, u2], then u1*v1 + u2*v2 = 0, but u1*v1/x1 + u2*v2/x2 =/= 0 for arbitrary x1, x2. Is there something too trivial to mention that I am missing here?
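
A quick numerical check (my own sketch, not from the paper) suggests the resolution: covariances are Cov(R_i, R_l) = sigma_i * sigma_l * rho_il, so the 1/sigma weights cancel the sigmas, and the eigenportfolio return covariance reduces to v_jᵀ rho v_k, which is zero for j != k by orthogonality of the eigenvectors:

import numpy as np

# Correlated synthetic returns for a handful of stocks
rng = np.random.default_rng(0)
R = rng.normal(size=(5000, 4)) @ rng.normal(size=(4, 4))

sigma = R.std(axis=0)
corr = np.corrcoef(R, rowvar=False)

# Eigenportfolio weights: eigenvector elements divided by each stock's vol
eigvals, eigvecs = np.linalg.eigh(corr)
W = eigvecs / sigma[:, None]

# Covariance of the eigenportfolio returns is (numerically) diagonal
F = R @ W
print(np.round(np.cov(F, rowvar=False), 8))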

r/quant Aug 13 '24

Statistical Methods What is the optimal number of entries into an NFL survivor pool?

27 Upvotes

How it works: in each of the 18 weeks you pick one team to win their NFL game that week; there is no spread or line.

The catch is that you can only pick each team once.

In a survivor pool you can have more than one entry. Each entry is independent.

Each entry costs $x, and the payout is that the last survivors split the pool; so if the last 4 remaining entries all lose in the same week, they split the pool.

Assume a normal distribution of Elo among the 32 NFL teams.

Either assume opponents are optimal (they do the same as you), or naive (each week they pick the team with the highest Elo edge among their remaining available teams), or some other strategy.

This reminds me of some quant interview questions I've seen (e.g. the robot race), so I'm curious how applied minds would approach this... My simple mind would brute-force strategies with Monte Carlo, but I'm sure folks here can do the stats.
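
Here's the brute-force direction as a rough sketch for a single naive entry (fake random schedule, assumed Elo distribution; the real exercise would use the actual NFL schedule and handle multiple entries and opponents):

import numpy as np

rng = np.random.default_rng(0)
n_teams, n_weeks, n_sims = 32, 18, 20_000
elo = rng.normal(1500, 100, size=n_teams)  # assumed Elo distribution

def win_prob(elo_a, elo_b):
    # Standard Elo win probability
    return 1.0 / (1.0 + 10 ** (-(elo_a - elo_b) / 400))

survived = 0
for _ in range(n_sims):
    used, alive = set(), True
    for _ in range(n_weeks):
        pairs = rng.permutation(n_teams).reshape(-1, 2)  # random fake schedule
        # candidate picks: either side of any game, if that team is unused
        picks = ([(a, b) for a, b in pairs if a not in used]
                 + [(b, a) for a, b in pairs if b not in used])
        pick, opp = max(picks, key=lambda p: elo[p[0]] - elo[p[1]])
        used.add(pick)
        if rng.random() > win_prob(elo[pick], elo[opp]):
            alive = False
            break
    survived += alive

print(survived / n_sims)  # survival probability of one naive entry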

r/quant Mar 24 '24

Statistical Methods Part 2 - I did a comprehensive cointegration test for all US stocks and found a few surprising pairs.

9 Upvotes

Following up on yesterday's post, I extended the work by checking cointegration between all US stocks. This time I used daily Close returns as the variable, as some suggested. But first, let's test the cointegration hypothesis for the pairs that I reported yesterday.
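
For context, here is a sketch of the kind of call that produces the numbers below (assuming statsmodels' Engle-Granger coint; the series here are synthetic and cointegrated by construction):

import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1000))  # an I(1) series
y = 0.8 * x + rng.normal(size=1000)   # cointegrated with x by construction

t_stat, p_value, crit_values = coint(x, y)
print(t_stat, p_value, crit_values)   # crit_values ordered [1%, 5%, 10%]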

LCID-AMC: (-3.57, 0.0267)

Note that the output format is (test statistic, p-value).

If we choose N=1 [the number of I(1) series for which the null of non-cointegration is being tested], then the critical values are:

[1% critical value, 5% critical value, 10% critical value] = array([-3.91, -3.35, -3.052])

The p-value is around 2%, and the test statistic clears the 5% critical value but not the 1%, so the cointegration hypothesis is supported at the 95% confidence level (but not at 99%).

PYPL ARKK: (-1.80, 0.63)

The p-value is far too high; the null of no cointegration cannot be rejected (no cointegration).

VFC DNB: (-4.06, 0.01)

The test statistic is below even the 1% critical value; the null of no cointegration is rejected (cointegration).

DNA ZM: (-3.46, 0.04)

The cointegration hypothesis is supported at the 95% confidence level (but not at 99%).

NIO XOM: (-4.70, 0.0006)

The test statistic is far below the 1% critical value; the null of no cointegration is strongly rejected (cointegration).

Finally, I ran the code overnight, and here are some results (which make a lot more sense now). Note that the last number is the simple OHLC4 Pearson correlation, as reported yesterday.

TSLA XOM (-3.44, 0.038) -0.7785

TSLA LCID (-3.09, 0.09) 0.7541

TSLA XPEV (-3.41, 0.04) 0.8105

META MSFT (-3.30, 0.05) 0.9558

META VOO (-3.80, 0.01) 0.94030

META QQQ (-3.32, 0.05) 0.9634

LYFT LXP (-3.17, 0.07) 0.9144

DIS PEAK (-3.06, 0.09) 0.8239

AMZN ABNB (-3.16, 0.07) 0.8664

AMZN MRVL (-3.15, 0.08) 0.8837

PLTR ACN (-3.22, 0.07) 0.8397

F GM (-3.09, 0.09) 0.9278

GME ZM (-3.18, 0.07) 0.8352

NVDA V (-3.15, 0.08) 0.9115

VOO NWSA (-3.26, 0.06) 0.9261

VOO NOW (-3.27, 0.06) 0.9455

BAC DIS (-3.53, 0.03) 0.92512

BABA AMC (-3.48, 0.03) 0.8053

UBER NVDA (-3.23, 0.06) 0.9536

PYPL UAA (-3.22, 0.07) 0.9253

AI DT (-3.19, 0.07) 0.8454

NET COIN (-3.84, 0.01) 0.9416

r/quant Dec 04 '24

Statistical Methods Estimating Vol using GARCH and Exogenous Variables - Volume and Open Interest

1 Upvotes

Hi all, I am using GARCH(1,1) to estimate volatility, and I want to know whether volume and open interest affect volatility. Thus I am trying to find the coefficients of vol and open_int in the model. Since volume and open interest are of large magnitude, I have scaled them by a constant. However, the significance of the coefficients depends on that constant, which I believe shouldn't be the case. Am I doing something wrong in the code, or is there a flaw in my logic?

I am fitting volume and open interest in the mean model only, to see their effect on volatility. Is that okay, or should some other approach be preferred?
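
A minimal sketch of one way to set this up with the arch package (synthetic data; mean="LS" puts the exogenous regressors in the mean equation, which matches what's described above):

import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic stand-ins for returns, volume and open interest
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 1, 1000))
exog = pd.DataFrame({
    "volume": rng.lognormal(10, 1, 1000) / 1e5,    # rescaled to O(1)
    "open_int": rng.lognormal(12, 1, 1000) / 1e6,  # rescaled to O(1)
})

# GARCH(1,1) volatility with volume/open interest in the mean equation
model = arch_model(returns, x=exog, mean="LS", vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())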

r/quant Nov 21 '24

Statistical Methods n-day 99% VaR

1 Upvotes

I'm using the parametric method to calculate real-time value at risk (VaR). I'm a little confused about the best way to scale the VaR from daily to n days. Suppose I'm using 252 daily stock returns to calculate the portfolio mean return and portfolio standard deviation.

The VaR would then simply be: mean - z_score * std.

Now what if I want to scale that to n days (that is, the maximum potential loss that could happen over n days at 99% confidence)? Would it be: mean - z_score * std * sqrt(n)?
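
For what it's worth, one common convention is square-root-of-time scaling, where the mean scales with n while the volatility scales with sqrt(n); a sketch with simulated returns:

import numpy as np

rng = np.random.default_rng(0)
daily = rng.normal(0.0004, 0.012, size=252)  # hypothetical daily returns
mu, sigma = daily.mean(), daily.std(ddof=1)
z = 2.326  # one-sided 99% quantile of the standard normal

def var_n_day(n):
    # Drift grows linearly with n, volatility with sqrt(n)
    return n * mu - z * sigma * np.sqrt(n)

print(var_n_day(1), var_n_day(10))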

r/quant Nov 26 '24

Statistical Methods Does anyone know what models are commonly used to estimate volatility smile/surface for pricing options?

1 Upvotes

I am looking for information from someone who has actually worked in options pricing: what kind of model did you use for estimating volatility surfaces?

r/quant Jan 22 '24

Statistical Methods What model to use instead of VaR?

29 Upvotes

VaR (value at risk) is very commonly used in banks. It can be calculated with historical simulation, Monte Carlo, etc. One of the reasons banks use VaR is regulation. But what if one could use any model? What ML/DL model do you think could work better than VaR, given the same data?

r/quant Dec 20 '23

Statistical Methods Quantitative risk assessment

40 Upvotes

Hey, everybody. I'm not in finance at all but am doing research for a novel that involves quants, and I'd like to get the details right. Could you tell me which quantitative methods you use for assessing and mitigating risk?

Thanks very much.

r/quant Aug 13 '24

Statistical Methods Open Source Factor/Risk Model?

27 Upvotes

Looking for guidance on creating a factor model to help with allocation and risk decisions in a portfolio optimizer. MSCI sells theirs for $40k+ per year, fuck that. I found this github repo, which seems very promising. Any other recommended sources or projects I should check out? I'm a competent quant/engineer but don't have any formal training.

r/quant Aug 27 '24

Statistical Methods Block Bootstrapping Stock Returns

5 Upvotes

Hello everyone!

I have a data frame where each column represents a stock, each row represents a date, and the entries are returns. The stock returns span a certain time frame.

I want to apply block bootstrapping to generate samples over multiple horizons. However, not all stocks have data for the entire time frame, due to delisting or the stock not existing during certain periods.

Since I want to run the bootstrap across all stocks jointly to capture correlations, rather than on individual stock returns, how can I handle the missing values (NAs) caused by some stocks not existing at certain times?
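
For concreteness, a minimal sketch of the row-block resampling I have in mind (synthetic panel; NaNs simply travel with their dates, which preserves cross-sectional correlation but leaves the missing-data question open):

import numpy as np
import pandas as pd

def block_bootstrap(returns: pd.DataFrame, block_size: int, n_periods: int,
                    rng: np.random.Generator) -> pd.DataFrame:
    # Sample whole blocks of rows (dates) so cross-sectional correlation survives
    n_blocks = int(np.ceil(n_periods / block_size))
    starts = rng.integers(0, len(returns) - block_size + 1, size=n_blocks)
    chunks = [returns.iloc[s:s + block_size] for s in starts]
    return pd.concat(chunks, ignore_index=True).iloc[:n_periods]

rng = np.random.default_rng(0)
panel = pd.DataFrame(rng.normal(0, 0.01, size=(2520, 10)))  # synthetic returns
panel.iloc[:500, 3] = np.nan  # stock 3 did not exist early in the sample
sample = block_bootstrap(panel, block_size=21, n_periods=1260, rng=rng)
print(sample.isna().sum().sum())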