§ 01

The single-ticker primitive.

Every Markets tool starts the same way: fetch price history, get adjusted close. Adjustment is non-negotiable — it folds in dividends and splits so the return series is comparable across corporate actions. A $200 stock that splits 4-for-1 doesn't drop 75%; the unadjusted series says it did, and naive code computes a -75% daily return. This is how mispriced backtests are born.
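A minimal demonstration of the failure mode, on a synthetic series (the prices and split factor are invented for illustration):

```python
import pandas as pd

# synthetic closes around a 4-for-1 split: the company is worth the same,
# but the unadjusted tape shows $200 -> $50 overnight
unadjusted   = pd.Series([200.0, 200.0, 50.0, 51.0])
split_factor = pd.Series([1.0, 1.0, 4.0, 4.0])                # shares multiplied 4x from day 2
adjusted     = unadjusted * split_factor / split_factor.iloc[-1]  # back-adjust to today's share count

naive_return = unadjusted.pct_change().iloc[2]   # -0.75: a phantom crash
true_return  = adjusted.pct_change().iloc[2]     #  0.00: nothing happened
```

Dividend adjustment works the same way: the raw close gaps down on the ex-date, and only the adjusted series keeps the return comparable.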

# shipped surface — six tools, one primitive
analyze_stock(ticker, start, end="")         # price chart + trend summary
get_returns(ticker, start, end="")           # daily + cumulative
get_volatility(ticker, start, end="")        # annualized + 21-day rolling
get_risk_metrics(ticker, start, end="")      # Sharpe, drawdown, beta vs ^GSPC
compare_tickers(tickers, start, end="")      # normalized cumulative return chart
correlation_map(tickers, start, end="")      # pairwise return correlation heatmap

# shared adapter — the only place yfinance/Massive is touched
provider.fetch_price_history(ticker, start, end) -> DataFrame
provider.get_adjusted_prices(df) -> Series
§ 02

Returns — arithmetic, log, and the difference that kills models.

Daily arithmetic returns sum poorly across time but multiply correctly. Log returns sum correctly across time, but the approximation log(1 + r) ≈ r breaks down for large moves, so they can't be read directly as percentages. The library returns arithmetic for display (because that's what the user expects to see in percent) and computes log internally where compounding matters.

import numpy as np   # prices: a Series of adjusted closes

daily   = prices.pct_change().dropna()               # arithmetic
cum     = (1 + daily).cumprod() - 1                  # compounded — what you actually earned
log_r   = np.log(prices / prices.shift(1)).dropna()  # used internally for vol

Design note.

The naive sum of daily arithmetic returns is wrong. A +50% day followed by a −50% day leaves you at 75% of capital, not 100%. Anyone who has watched a quant intern present "average daily return × 252" has seen this go badly. The cumulative product is the only honest answer.
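The pitfall in numbers, on a toy two-day series (values chosen for illustration):

```python
import numpy as np
import pandas as pd

daily = pd.Series([0.50, -0.50])          # +50% then -50%

naive_sum  = daily.sum()                  # 0.0 — "flat", says the intern
compounded = (1 + daily).prod() - 1       # -0.25 — you lost a quarter of capital

# log returns restore additivity: their sum recovers the true compounded figure
log_sum = np.log(1 + daily).sum()
assert abs(np.expm1(log_sum) - compounded) < 1e-12
```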

§ 03

Volatility — realized, rolling, and regime.

get_volatility returns two numbers: annualized standard deviation of daily log returns (a single point estimate over the window) and a 21-day rolling realized vol (a series, useful for spotting regime changes). One number lies; two numbers tell the story.

vol_annualized = log_returns.std() * np.sqrt(252)
vol_rolling    = log_returns.rolling(21).std() * np.sqrt(252)

Why √252? Variance scales linearly with time under the IID assumption; standard deviation scales with the square root. 252 is the conventional count of US trading days in a year. The IID assumption is wrong (returns cluster — vol begets vol — which is why GARCH exists), but for a quick read on a single name, √252 is the convention every desk uses. Picking your battles.

The rolling 21-day window matters more than the headline number. Tesla annualized at 24% over a year tells you nothing about whether the last six weeks have been 12% or 60%. The rolling series tells you whether you're inside or outside the regime when you're reading the report.
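A sketch of why the rolling series earns its keep, on synthetic log returns with a deliberate regime break (the volatility levels and window lengths are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# 200 quiet days at ~12% annualized vol, then 60 stressed days at ~60%
quiet       = rng.normal(0, 0.12 / np.sqrt(252), 200)
stressed    = rng.normal(0, 0.60 / np.sqrt(252), 60)
log_returns = pd.Series(np.concatenate([quiet, stressed]))

vol_annualized = log_returns.std() * np.sqrt(252)             # one blended number
vol_rolling    = log_returns.rolling(21).std() * np.sqrt(252)

# the headline sits between the two regimes; the rolling tail sits inside the new one
```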

§ 04

Risk metrics, exactly as shipped.

Sharpe, max drawdown, and beta against ^GSPC. The full per-holding decomposition is covered in the Portfolio page; here is the math, in isolation.

import numpy as np
import pandas as pd

def _compute_risk_metrics(returns, benchmark_returns) -> dict:
    # Sharpe with a zero risk-free rate, annualized by √252
    sharpe = (returns.mean() / returns.std()) * np.sqrt(252)

    # max drawdown: worst gap between the wealth path and its running peak
    wealth       = (1 + returns).cumprod()
    max_drawdown = ((wealth - wealth.cummax()) / wealth.cummax()).min()

    # beta: covariance with the benchmark over benchmark variance,
    # computed only on dates both series actually have
    aligned = pd.concat([returns, benchmark_returns], axis=1).dropna()
    cov     = np.cov(aligned.iloc[:, 0], aligned.iloc[:, 1])
    beta    = float(cov[0, 1] / cov[1, 1])

    return {"sharpe": float(sharpe), "max_drawdown": float(max_drawdown), "beta": beta}

"Sharpe is risk-adjusted return. Max drawdown is the loss you actually had to live with. Beta is what the market did to you. Three numbers, three different questions, three different audiences."
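A quick sanity check on the beta arithmetic: a series built as 1.5× the benchmark should come back with beta 1.5 exactly. Synthetic data, same formula as the function above, restated inline so it runs standalone:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
benchmark = pd.Series(rng.normal(0, 0.01, 252))   # stand-in for ^GSPC daily returns
returns   = 1.5 * benchmark                       # a leveraged clone of the market

aligned = pd.concat([returns, benchmark], axis=1).dropna()
cov     = np.cov(aligned.iloc[:, 0], aligned.iloc[:, 1])
beta    = float(cov[0, 1] / cov[1, 1])            # 1.5 by construction
```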

§ 05

What this page is not.