Documentation

Strategy Authoring Guide

Define trading strategies using the manifoldbt Python DSL. Strategies compile into vectorized expression graphs executed by the Rust engine.

Installation

Requirements

Python 3.9+ on Linux, macOS, or Windows.

Install

shell
pip install manifoldbt

For plotting support (matplotlib):

shell
pip install manifoldbt[plot]

Verify

python
import manifoldbt as mbt
print(mbt.license_info())

Output:

("Community", None)

Data Store

Most users get a DataStore directly from mbt.ingest() (see Data Ingestion). If you already have Arrow IPC files on disk, you can create one manually:

python
store = mbt.DataStore(
    data_root="data",
    metadata_db="metadata/metadata.sqlite",
    arrow_dir="data/mega",
)

Quick Start

A strategy is a Python script that imports manifoldbt, defines indicators and signals as expression objects, builds a Strategy, configures the backtest, and calls mbt.run().

python
import manifoldbt as mbt
from manifoldbt.indicators import close, ema
from manifoldbt.helpers import time_range, Slippage, Interval

# -- Indicators
fast = ema(close, 10)
slow = ema(close, 25)

# -- Strategy
signal = mbt.when(fast > slow, 1.0, 0.0)

strategy = (
    mbt.Strategy.create("ema_crossover")
    .signal("fast", fast)
    .signal("slow", slow)
    .size(signal)
)

# -- Config
start, end = time_range("2021-01-01", "2026-01-01")

config = mbt.BacktestConfig(
    universe={"binance": ["BTCUSDT"]},
    time_range_start=start,
    time_range_end=end,
    bar_interval=Interval.hours(1),
    initial_capital=10_000,
    fees=mbt.FeeConfig.binance_perps(),
    slippage=Slippage.fixed_bps(2),
    warmup_bars=25,
)

# -- Run (ingest downloads data and returns a DataStore)
store = mbt.ingest(provider="binance", symbol="BTCUSDT", symbol_id=1,
                   interval="1h", start="2021-01-01T00:00:00Z", end="2026-01-01T00:00:00Z")
result = mbt.run(strategy, config, store)
print(result.summary())
mbt.plot.tearsheet(result, show=True)

Data Ingestion

Use mbt.ingest() to download bar data from supported providers directly into your local store. It returns a DataStore ready for backtesting.

Binance / Hyperliquid

python
# Single symbol
store = mbt.ingest(
    provider="binance",
    symbol="BTCUSDT",
    symbol_id=1,
    start="2020-01-01T00:00:00Z",
    end="2026-01-01T00:00:00Z",
)

# Multiple symbols (sequential, with per-symbol progress bar)
store = mbt.ingest(
    provider="binance",
    symbols=[("BTCUSDT", 1), ("ETHUSDT", 2), ("SOLUSDT", 3)],
    start="2020-01-01T00:00:00Z",
    end="2026-01-01T00:00:00Z",
)

result = mbt.run(strategy, config, store)

Ingestion fetches data with 8 concurrent workers, stores as per-symbol Arrow IPC files, and merges with existing data automatically (dedup by timestamp).

dYdX

dYdX v4 perpetual futures via the Indexer API. No API key required. Supports candles (1m–1d), trades, and funding rates.

python
store = mbt.ingest(
    provider="dydx",
    symbol="BTC-USD",
    symbol_id=1,
    start="2024-01-01T00:00:00Z",
    end="2025-01-01T00:00:00Z",
    asset_class="crypto_perp",
)

# dYdX uses dash-separated tickers: BTC-USD, ETH-USD, SOL-USD
# Supported intervals: 1m, 5m, 15m, 30m, 1h, 4h, 1d

Bitstamp

Bitstamp spot market data via the public REST API v2. No API key required. Supports OHLC candles (1m–1d) and recent trades.

python
store = mbt.ingest(
    provider="bitstamp",
    symbol="BTCUSD",
    symbol_id=1,
    start="2024-01-01T00:00:00Z",
    end="2025-01-01T00:00:00Z",
    asset_class="crypto_spot",
)

# Bitstamp uses concatenated tickers: BTCUSD, ETHEUR, XRPUSD
# Supported intervals: 1m, 3m, 5m, 15m, 30m, 1h, 2h, 4h, 1d

Databento Pro

Requires a Pro license and a DATABENTO_API_KEY environment variable. Activate once; the key is saved to disk and reloaded automatically on future imports.

python
# First time only
mbt.activate("YOUR_PRO_KEY")

# Then use directly
store = mbt.ingest(
    provider="databento",
    symbol="ESH5",
    symbol_id=1,
    start="2025-01-01T00:00:00Z",
    end="2025-01-31T00:00:00Z",
    dataset="GLBX.MDP3",
    exchange="CME",
    asset_class="future",
)

Massive Pro

Requires a Pro license and a MASSIVE_API_KEY environment variable. Covers stocks, ETFs, futures, options, forex, indices, and crypto.

python
store = mbt.ingest(
    provider="massive",
    symbol="AAPL",
    symbol_id=1,
    start="2025-01-01T00:00:00Z",
    end="2025-03-01T00:00:00Z",
    exchange="MASSIVE",
    asset_class="equity",
)

CLI

Data can also be ingested from the command line:

shell
manifoldbt ingest --provider binance --symbol BTCUSDT --symbol-id 1 \
    --start 2025-01-01T00:00:00Z --end 2025-03-01T00:00:00Z

Parameters

Parameter     Required         Default         Description
provider      yes              --              One of "binance", "hyperliquid", "dydx", "bitstamp", "databento" (Pro), "massive" (Pro)
symbol        *                --              Ticker symbol (e.g. "BTCUSDT"). Use with symbol_id
symbol_id     *                --              Unique integer ID for the symbol. Use with symbol
symbols       *                --              List of (ticker, id) tuples for multi-symbol ingest
start         yes              --              RFC3339 start timestamp
end           yes              --              RFC3339 end timestamp
interval      no               "1m"            Bar interval: 1s, 1m, 5m, 15m, 30m, 1h, 4h, 1d
dataset       databento only   --              Databento dataset (e.g. "GLBX.MDP3")
data_root     no               "data"          Output directory for Arrow IPC files
progress      no               True            Show rich progress bar (requires rich)
exchange      no               provider name   Exchange name for metadata
asset_class   no               "crypto_spot"   crypto_spot, crypto_perp, equity, future, forex

* Provide either symbol and symbol_id together, or symbols.

Indicators

All indicators live in manifoldbt.indicators and return Expr objects that compose into the expression graph. Pre-built column references: open, high, low, close, volume, vwap, timestamp.

Moving Averages

sma(source: Expr, period: int | param) -> Expr

Simple Moving Average. Period can be a literal int or a param() for sweeps.

Argument   Type          Default   Description
source     Expr          --        Input series (e.g. close)
period     int | param   --        Lookback window

ema(source: Expr, span: float | int | param) -> Expr

Exponential Moving Average (span-based, alpha = 2/(span+1)).

Argument   Type                  Default   Description
source     Expr                  --        Input series
span       float | int | param   --        EMA span (converted to float internally)

dema(source: Expr, period: int = 14) -> Expr

Double Exponential Moving Average.

tema(source: Expr, period: int = 14) -> Expr

Triple Exponential Moving Average.

wma(source: Expr, period: int = 14) -> Expr

Weighted Moving Average.

hma(source: Expr, period: int = 14) -> Expr

Hull Moving Average.

kama(source: Expr, period: int = 10) -> Expr

Kaufman Adaptive Moving Average.

python
from manifoldbt.indicators import close, sma, ema, dema, hma

fast = ema(close, 10)
slow = sma(close, 60)
hull = hma(close, 20)
double = dema(close)  # period defaults to 14

Momentum

rsi(source: Expr, period: int = 14) -> Expr

Relative Strength Index (Wilder's smoothing, single-pass O(n)). Returns values in [0, 100].

Argument   Type          Default   Description
source     Expr          --        Input price series
period     int | param   14        Lookback window

roc(source: Expr, period: int = 1) -> Expr

Rate of Change.

momentum(source: Expr, period: int = 1) -> Expr

Raw price difference (source - source.lag(period)).

macd(source: Expr, fast_period: int = 12, slow_period: int = 26, signal_period: int = 9) -> Tuple[Expr, Expr, Expr]

Moving Average Convergence Divergence. Returns a 3-tuple: (macd_line, signal_line, histogram).

Argument        Type   Default   Description
source          Expr   --        Input price series
fast_period     int    12        Fast EMA span
slow_period     int    26        Slow EMA span
signal_period   int    9         Signal line EMA span

stoch_k(period: int = 14) -> Expr

Stochastic %K oscillator (native Rust, uses high/low/close columns).

adx(period: int = 14) -> Expr

Average Directional Index (native Rust, uses high/low/close).

cci(period: int = 20) -> Expr

Commodity Channel Index (native Rust, uses high/low/close).

williams_r(period: int = 14) -> Expr

Williams %R oscillator (native Rust, uses high/low/close).

python
from manifoldbt.indicators import close, rsi, macd, adx

my_rsi = rsi(close, 14)
macd_line, signal_line, histogram = macd(close)
trend_strength = adx(14)

Volatility

atr(period: int = 14) -> Expr

Average True Range (Wilder's smoothing, single-pass O(n)). Uses high, low, close columns.

true_range() -> Expr

True Range (single bar, uses high/low/close).

natr(period: int = 14) -> Expr

Normalized ATR (ATR as percentage of close, uses high/low/close).

bollinger_bands(source: Expr, period: int = 20, num_std: float = 2.0) -> Tuple[Expr, Expr, Expr]

Bollinger Bands (native Rust). Returns a 3-tuple: (upper, middle, lower).

Argument   Type    Default   Description
source     Expr    --        Input price series
period     int     20        SMA lookback window
num_std    float   2.0       Number of standard deviations

bollinger_width(source: Expr, period: int = 20, num_std: float = 2.0) -> Expr

Bollinger Bandwidth (upper - lower, normalized).

keltner_channels(period: int = 20, multiplier: float = 1.5) -> Tuple[Expr, Expr, Expr]

Keltner Channels (native Rust, uses high/low/close). Returns (upper, middle, lower).

Argument     Type    Default   Description
period       int     20        EMA and ATR lookback window
multiplier   float   1.5       ATR multiplier for channel width

supertrend(period: int = 10, multiplier: float = 3.0) -> Expr

SuperTrend indicator (native Rust, uses high/low/close).

python
from manifoldbt.indicators import close, atr, bollinger_bands, keltner_channels

vol = atr(14)
upper, middle, lower = bollinger_bands(close, 20, 2.0)
k_upper, k_mid, k_lower = keltner_channels(20, 1.5)

Volume

obv(source: Expr = None, vol: Expr = None) -> Expr

On-Balance Volume. Defaults to close and volume columns.

vwap() -> Expr

Volume Weighted Average Price (uses high/low/close/volume).

ad_line() -> Expr

Accumulation/Distribution Line (uses high/low/close/volume).

mfi(period: int = 14) -> Expr

Money Flow Index (uses high/low/close/volume).
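
A short sketch combining the volume indicators above (the thresholds are illustrative, not recommendations):

python
from manifoldbt.indicators import close, obv, mfi, vwap

obv_rising = obv().diff(5) > 0      # net buying pressure over the last 5 bars
mf_ok      = mfi(14) < 80           # money flow not overbought
above_vwap = close > vwap()         # price trading above VWAP

confirm = obv_rising & mf_ok & above_vwap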

Crossover Signals

crossover(a: Expr, b: Expr) -> Expr

True on bars where a crosses above b.

crossunder(a: Expr, b: Expr) -> Expr

True on bars where a crosses below b.
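
Combined with mbt.when(), crossover signals give event-style entries that fire once per cross rather than on every bar; this sketch relies on the default hold-on-NaN behavior of when():

python
import manifoldbt as mbt
from manifoldbt.indicators import close, ema, crossover, crossunder

fast = ema(close, 10)
slow = ema(close, 25)

enter = crossover(fast, slow)    # True only on the bar where fast crosses above slow
exit_ = crossunder(fast, slow)

# Go long on the cross up, flat on the cross down, hold in between
signal = mbt.when(enter, 1.0, mbt.when(exit_, 0.0))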

Linear Regression

linreg_slope(source: Expr, window: int) -> Expr

Rolling linear regression slope (single-pass O(n)).

linreg_value(source: Expr, window: int) -> Expr

Predicted y at the last point of the rolling window.

linreg_r2(source: Expr, window: int) -> Expr

Coefficient of determination R-squared in [0, 1] (single-pass O(n)).
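
A minimal trend-filter sketch using these functions (the 0.6 R-squared cutoff is illustrative):

python
import manifoldbt as mbt
from manifoldbt.indicators import close, linreg_slope, linreg_r2

slope = linreg_slope(close, 50)
r2    = linreg_r2(close, 50)

# Long only when the regression slopes up and the fit is strong
signal = mbt.when((slope > 0) & (r2 > 0.6), 1.0, 0.0)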

Trend

parabolic_sar(af_start: float = 0.02, af_max: float = 0.2) -> Expr

Parabolic SAR (native Rust, uses high/low).

Math Helpers

abs_val(x: Expr) -> Expr
sqrt(x: Expr) -> Expr
log(x: Expr) -> Expr
exp(x: Expr) -> Expr
max_val(a: Expr, b: Expr) -> Expr
min_val(a: Expr, b: Expr) -> Expr

Element-wise math operations: absolute value, square root, natural log, exponential, max, min.
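
The helpers compose with method chaining; note that max_val() takes two Expr arguments, so a numeric floor needs mbt.lit():

python
import manifoldbt as mbt
from manifoldbt.indicators import close, log, abs_val, max_val

log_ret  = log(close).diff(1)                              # log return
abs_move = abs_val(log_ret)                                # magnitude of the move
denom    = max_val(close.rolling_std(20), mbt.lit(1e-8))   # floor to avoid divide-by-zero

scaled = log_ret / denom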

Datetime Extraction

hour(source: Expr = None) -> Expr       # 0-23 UTC
minute(source: Expr = None) -> Expr     # 0-59
day_of_week(source: Expr = None) -> Expr # 0=Mon, 6=Sun
month(source: Expr = None) -> Expr      # 1-12
day_of_month(source: Expr = None) -> Expr # 1-31

Extract datetime components from a timestamp column. All default to the bar timestamp column.

python
from manifoldbt.indicators import hour, day_of_week

# Trade only during US equity hours (14:30-21:00 UTC)
us_hours = (hour() >= 14) & (hour() < 21)
# Skip weekends
is_weekday = day_of_week() < 5

Filters (Scan-based)

kalman(source: Expr = None, q: float = 1e-5, r: float = 1e-2) -> Expr

1-D Kalman filter (constant-velocity model). Uses the scan primitive -- runs entirely in Rust as a flat scalar VM.

Argument   Type    Default   Description
source     Expr    close     Input price series
q          float   1e-5      Process noise covariance
r          float   1e-2      Measurement noise covariance

garch(source: Expr = None, omega: float = 1e-6, alpha: float = 0.1, beta: float = 0.85) -> Expr

GARCH(1,1) conditional volatility estimator. Defaults to close.pct_change(1) for the return series. Returns conditional standard deviation.

Argument   Type    Default               Description
source     Expr    close.pct_change(1)   Return series
omega      float   1e-6                  Long-run variance weight
alpha      float   0.1                   Weight on lagged squared return (ARCH term)
beta       float   0.85                  Weight on lagged conditional variance (GARCH term)
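
A sketch using both filters together (the 2% volatility cutoff is illustrative):

python
import manifoldbt as mbt
from manifoldbt.indicators import close, kalman, garch

smooth   = kalman(close, q=1e-5, r=1e-2)   # denoised price estimate
cond_vol = garch()                         # conditional stdev of close.pct_change(1)

# Trend-follow the filtered price; halve exposure when volatility is elevated
raw    = mbt.when(close > smooth, 1.0, 0.0)
signal = raw * mbt.when(cond_vol < 0.02, 1.0, 0.5)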

Statistics

rolling_median(source: Expr, window: int) -> Expr

Rolling median (native Rust).

Expr Method Chaining

Every Expr exposes chainable methods for rolling computations. All methods return a new Expr.

Method                Signature                             Description
.lag(n)               lag(n: int) -> Expr                   Value at t - n
.lead(n)              lead(n: int) -> Expr                  Value at t + n
.diff(n)              diff(n: int = 1) -> Expr              x[t] - x[t-n]
.pct_change(n)        pct_change(n: int = 1) -> Expr        Fractional change
.rolling_mean(w)      rolling_mean(window: int) -> Expr     Rolling mean (SMA)
.rolling_std(w)       rolling_std(window: int) -> Expr      Rolling standard deviation
.rolling_sum(w)       rolling_sum(window: int) -> Expr      Rolling sum
.rolling_min(w)       rolling_min(window: int) -> Expr      Rolling minimum
.rolling_max(w)       rolling_max(window: int) -> Expr      Rolling maximum
.rolling_median(w)    rolling_median(window: int) -> Expr   Rolling median
.ewm_mean(s)          ewm_mean(span: float) -> Expr         Exponentially weighted mean
.zscore(w)            zscore(window: int) -> Expr           (x - mean) / std over window
.rsi(p)               rsi(period: int = 14) -> Expr         Relative Strength Index
.cumsum()             cumsum() -> Expr                      Cumulative sum
.cumprod()            cumprod() -> Expr                     Cumulative product
.rank()               rank() -> Expr                        Expanding rank
.cs_mean()            cs_mean() -> Expr                     Cross-sectional mean (multi-asset)
.cs_rank()            cs_rank() -> Expr                     Cross-sectional rank (multi-asset)
.cross_above(other)   cross_above(other: Expr) -> Expr      True when self crosses above other
.cross_below(other)   cross_below(other: Expr) -> Expr      True when self crosses below other
.of_symbol(sym)       of_symbol(symbol: str) -> Expr        Reference column from another symbol
.hour()               hour() -> Expr                        Extract hour (0-23 UTC)
.minute()             minute() -> Expr                      Extract minute (0-59)
.day_of_week()        day_of_week() -> Expr                 Extract day of week (0=Mon)
.month()              month() -> Expr                       Extract month (1-12)
.day_of_month()       day_of_month() -> Expr                Extract day of month (1-31)
.dema(p)              dema(period: int = 14) -> Expr        Double Exponential Moving Average
.tema(p)              tema(period: int = 14) -> Expr        Triple Exponential Moving Average
.wma(p)               wma(period: int = 14) -> Expr         Weighted Moving Average
.hma(p)               hma(period: int = 14) -> Expr         Hull Moving Average
.kama(p)              kama(period: int = 10) -> Expr        Kaufman Adaptive Moving Average
.roc(p)               roc(period: int = 1) -> Expr          Rate of Change

python
from manifoldbt.indicators import close, linreg_slope, linreg_r2

z     = close.zscore(60)
slope = linreg_slope(close, 20)
r2    = linreg_r2(close, 20)
std   = close.rolling_std(30)
rng   = close.rolling_max(14) - close.rolling_min(14)
cross = close.ewm_mean(10).cross_above(close.ewm_mean(25))

Signals & Sizing

Strategies use the fluent builder pattern. Add named signals with .signal() and set the position sizing expression with .size().

DSL Functions

col(name: str) -> Expr

Reference a data column (e.g. "close", "volume", or a named signal).

lit(value: Any) -> Expr

Create a literal constant expression. Required when a Python number appears on the left side of an operator.

hold() -> Expr

Returns NaN -- tells the engine to hold the current position unchanged.

param(name: str, *, default: Any = None, range: Any = None, description: str = "") -> Expr

Create a sweepable parameter reference. Metadata is auto-collected by Strategy.

Argument      Type         Default   Description
name          str          --        Parameter name (must be unique)
default       Any          None      Default value
range         (min, max)   None      Bounds for sweeps
description   str          ""        Human-readable description

symbol_ref(symbol: str, column: str) -> Expr

Reference a column from a specific symbol's data for cross-asset strategies. With dict universe, use qualified names: symbol_ref("binance:BTCUSDT", "close").

exo(name: str, column: Optional[str] = None) -> Expr

Reference an exogenous data column. If column is omitted, defaults to name. Equivalent to col("exo.{name}.{column}"). See Exogenous Data.

asset(symbol: str) -> AssetRef

Create a reference to a specific symbol. Call .col("close") on the returned object.
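
The DSL functions above compose as follows; whether asset() accepts provider-qualified names like symbol_ref() does is an assumption here, so the sketch uses the bare ticker:

python
import manifoldbt as mbt

btc_close = mbt.symbol_ref("binance:BTCUSDT", "close")  # qualified cross-asset reference
eth_close = mbt.asset("ETHUSDT").col("close")           # AssetRef form
fear      = mbt.exo("fear_greed")                       # exogenous series (see Exogenous Data)

ratio = eth_close / btc_close

# Go long above the 50-bar BTC mean; otherwise hold the current position (NaN)
signal = mbt.when(mbt.col("close") > btc_close.rolling_mean(50), 1.0, mbt.hold())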

scan(state: dict[str, Expr], update: dict[str, Expr], output: str) -> Expr

Stateful fold expression. Executes entirely in Rust as a flat register-based scalar VM. Use s.prev("name") for previous state and s.var("name") for intra-step references.

mbt.when()

when(condition: Expr, true_value: Any = 1.0, false_value: Any = NaN) -> Expr

Conditional expression (if/else). Omit true_value to default to 1.0 (full position). Omit false_value to hold current position (NaN).

Argument      Type   Default   Description
condition     Expr   --        Boolean expression
true_value    Any    1.0       Value when condition is true
false_value   Any    NaN       Value when condition is false (NaN = hold)

python
# Long when oversold, flat when overbought, hold otherwise
signal = mbt.when(zscore < -1.0, 1.0,
         mbt.when(zscore > 1.0, 0.0))

# Simple binary: long or flat
signal = mbt.when(fast > slow, 0.5, 0.0)

# Long/short
signal = mbt.when(fast > slow, 1.0, -1.0)

Arithmetic with mbt.lit()

For expressions that start with a Python number, use mbt.lit() to create an explicit literal:

python
# This works -- Expr on left side
weighted = zscore * 0.5

# This needs lit() -- number on left side
inverse = mbt.lit(1.0) - zscore

Sizing Values

Value   Meaning
1.0     Full long position (clamped by max_position_pct)
0.5     Half position
0.0     Go flat -- exit all positions
-0.5    Half short (requires allow_short=True)
NaN     Hold current position unchanged

Sizing Modes

Mode                       Description
FractionOfEquity           Target 1.0 = 100% of current equity (compounds)
FractionOfInitialCapital   Target 1.0 = 100% of initial capital (no compounding)
Units                      Target 1.0 = 1 unit (share/contract/coin)

python
execution = mbt.ExecutionConfig(
    position_sizing_mode="FractionOfEquity",  # default
    max_position_pct=1.0,
)

Parameters & Sweeps

Use mbt.param() in indicator periods. Parameters are auto-collected from all signal expressions -- no .param() call on Strategy needed.

python
# Define parameterized indicators
fast_p = mbt.param("fast", default=10)
slow_p = mbt.param("slow", default=25)

fast = ema(close, fast_p)
slow = ema(close, slow_p)

strategy = (
    mbt.Strategy.create("ema_sweep")
    .signal("fast", fast)
    .signal("slow", slow)
    .size(mbt.when(fast > slow, 1.0, 0.0))
)

# Run a parameter sweep (Cartesian product, parallel via rayon)
sweep = mbt.run_sweep(
    strategy,
    param_grid={"fast": [5, 10, 15, 20], "slow": [30, 50, 60]},
    config=config,
    store=store,
)

print(sweep.best("sharpe"))
df = sweep.to_df()

Run Functions

run(strategy: Strategy, config: BacktestConfig, store: DataStore) -> Result

Run a single backtest and return a rich Result with DataFrame conversion, summaries, and plotting methods.

run_sweep(
    strategy: Strategy,
    param_grid: dict[str, list],
    config: BacktestConfig,
    store: DataStore,
    *, max_parallelism: int = 0,
) -> SweepResult

Cartesian product parameter sweep (parallel via rayon). Returns a SweepResult with .to_df(), .best(metric), .plot_metric().

Argument          Type             Default   Description
strategy          Strategy         --        Strategy definition
param_grid        dict             --        Mapping of param names to value lists, e.g. {"fast": [10, 20, 30]}
config            BacktestConfig   --        Backtest configuration
store             DataStore        --        Data store
max_parallelism   int              0         Max threads; 0 = all cores

SweepResult

The SweepResult returned by run_sweep() is an iterable, indexable collection of results with convenience methods for analysis.

Method / Attr    Signature                                   Description
.best(metric)    best(metric: str) -> Result                 Result with the highest value for the given metric (e.g. "sharpe")
.worst(metric)   worst(metric: str) -> Result                Result with the lowest value for the given metric
.to_df()         to_df(backend: str = "auto") -> DataFrame   All results as a DataFrame with params + metrics columns
len(sweep)       __len__() -> int                            Number of results in the sweep
sweep[i]         __getitem__(index: int) -> Result           Access a single result by index
for r in sweep   __iter__() -> Iterator[Result]              Iterate over all results

run_sweep_lite(
    strategy: Strategy,
    param_grid: dict[str, list],
    config: BacktestConfig,
    store: DataStore,
    *, max_parallelism: int = 0,
) -> list[BatchResultLite]

Parameter sweep returning only metrics (no Arrow output). Much faster for large grids. Each result has .name, .metrics, .equity, .trade_count.

run_batch(
    strategies: list[Strategy],
    config: BacktestConfig,
    store: DataStore,
    *, max_parallelism: int = 0,
) -> list[Result]

Run many strategies in parallel sharing a single data load. Loads bars once, aligns timestamps once, then evaluates each strategy on a separate rayon thread.

run_batch_lite(
    strategies: list[Strategy],
    config: BacktestConfig,
    store: DataStore,
    *, max_parallelism: int = 0,
) -> list[BatchResultLite]

Like run_batch but returns only metrics (no Arrow output). Ideal for screening many strategies.
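
A screening sketch; make_strategy is a hypothetical user-defined factory, and indexing .metrics as a mapping is an assumption:

python
# Screen several strategy variants in one parallel pass (single shared data load)
strategies = [make_strategy(period) for period in (10, 20, 50)]

lite = mbt.run_batch_lite(strategies, config, store)
for r in lite:
    print(r.name, r.metrics["sharpe"], r.trade_count)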

Research Functions Pro

run_walk_forward(
    strategy: Strategy,
    wf_config: dict,
    config: BacktestConfig,
    store: DataStore,
) -> dict

Walk-forward analysis (Pro only). Returns dict with folds and best_params_per_fold.

wf_config key     Type    Description
method            str     "Anchored" or "Rolling"
n_splits          int     Number of folds
train_ratio       float   Fraction for training, in (0, 1)
optimize_metric   str     e.g. "sharpe", "sortino"
param_grid        dict    Parameter grid for optimization
max_parallelism   int     Max threads

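A sketch of a walk-forward run built from the keys above:

python
wf = mbt.run_walk_forward(
    strategy,
    wf_config={
        "method": "Rolling",
        "n_splits": 5,
        "train_ratio": 0.7,
        "optimize_metric": "sharpe",
        "param_grid": {"fast": [5, 10, 20], "slow": [30, 50]},
        "max_parallelism": 0,
    },
    config=config,
    store=store,
)
print(wf["best_params_per_fold"])
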
run_sweep_2d(
    strategy: Strategy,
    sweep_config: dict,
    config: BacktestConfig,
    store: DataStore,
) -> dict

2D parameter sweep (heatmap). Returns dict with metric_grid, x_values, y_values.

sweep_config key   Type   Description
x_param            str    First parameter name
x_values           list   Values for x_param
y_param            str    Second parameter name
y_values           list   Values for y_param
metric             str    Metric to collect (e.g. "sharpe")
max_parallelism    int    Max threads

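A 2D sweep sketch; the row/column orientation of metric_grid is an assumption:

python
heat = mbt.run_sweep_2d(
    strategy,
    sweep_config={
        "x_param": "fast", "x_values": [5, 10, 15, 20],
        "y_param": "slow", "y_values": [30, 50, 60],
        "metric": "sharpe",
    },
    config=config,
    store=store,
)
grid = heat["metric_grid"]  # assumed: one row per y_value, one column per x_value
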
run_stability(
    strategy: Strategy,
    stability_config: dict,
    config: BacktestConfig,
    store: DataStore,
) -> dict

Parameter stability analysis. Returns dict with stability_score, metric_values, mean_metric, std_metric.

stability_config key   Type   Description
param_name             str    Parameter to vary
values                 list   Values to test
metric                 str    Metric to evaluate
max_parallelism        int    Max threads

python
results = mbt.run_sweep_lite(
    strategy,
    param_grid={"fast": list(range(5, 100)), "slow": list(range(50, 200))},
    config=config,
    store=store,
)
# Each result has: .name, .metrics, .equity, .trade_count

Stochastic Simulation

Generate synthetic price paths from stochastic differential equations (SDEs). Define custom models via string expressions, all compiled to native Rust and executed with Rayon parallelism (CPU) or CUDA (GPU). No Python callback overhead.

run_stochastic(
    model,
    *, s0=100.0, n_paths=1000, n_steps=252, dt=1/252,
    params=None, seed=None, confidence_levels=None,
    store_paths=False, device="cpu",
) -> dict

Run a stochastic simulation. Returns dict with final_price, final_return, max_drawdown, annualized_return, annualized_vol (each with percentiles, mean, std, min, max), and optionally paths (Arrow array).

Argument      Type                    Default   Description
model         str | StochasticModel   --        Preset name ("gbm", "heston", "merton", "garch_jd") or a StochasticModel instance
s0            float                   100.0     Initial price
n_paths       int                     1000      Number of simulation paths (Community: max 1000)
n_steps       int                     252       Time steps per path
dt            float                   1/252     Time step in years (1/252 = daily, 1/252/390 = minute)
params        dict | None             None      Parameter overrides (merged with model defaults)
seed          int | None              None      RNG seed for reproducibility
store_paths   bool                    False     Store full price paths (memory-intensive, CPU only)
device        str                     "cpu"     "cpu" (Rayon) or "cuda" (GPU, requires --features cuda build)

Built-in Presets

Preset       SDE                                                    Default Params
"gbm"        dS = μS dt + σS dW                                     mu=0.05, sigma=0.2
"heston"     dS = μS dt + √v·S dW; dv = κ(θ−v) dt + ξ√v dW₂         mu=0.05, kappa=2, theta=0.04, xi=0.3
"merton"     dS = μS dt + σS dW + JS dN(λ)                          mu=0.05, sigma=0.2, lambda=1, mu_j=-0.05, sigma_j=0.08
"garch_jd"   dS = μS dt + √h·S dW + JS dN; hₜ₊₁ = ω+α(r−μ)²+βhₜ     mu=0.08, omega=1e-6, alpha=0.1, beta=0.85, lambda=5, mu_j=-0.02, sigma_j=0.04

Custom Models (StochasticModel)

StochasticModel(
    *, drift: str, diffusion: str,
    jump_intensity: str = None,
    jump_size: str = None,
    state_vars: dict = None,
    state_update: dict = None,
    params: dict = None,
    name: str = "custom",
)

Define a custom SDE via string expressions. The model has the form:
dS = drift(S,t,state) · S · dt + diffusion(S,t,state) · S · dW + jump_size · dN(jump_intensity)

Argument         Type   Description
drift            str    Drift expression (μ), e.g. "mu" or "mu - 0.5 * h"
diffusion        str    Diffusion expression (σ), e.g. "sigma" or "sqrt(h)"
jump_intensity   str    Jump rate (λ), e.g. "lambda"
jump_size        str    Jump magnitude, e.g. "normal(mu_j, sigma_j)"
state_vars       dict   Extra state variables with initial values, e.g. {"h": 1e-4}
state_update     dict   Update expressions for state vars after each step
params           dict   Model parameters (name → value)

Available identifiers in expressions: any key in params, any key in state_vars, S (price), ret (last log-return), dt, t, step.

Available functions: sqrt, abs, log, exp, floor, max, min, pow, normal(mu, sigma), uniform(lo, hi), randn(), if(cond, then, else).

Operators: + - * / **, > < >= <= ==, && || !, ternary cond ? a : b.

python
# Preset, GBM with 10M paths on GPU
result = mbt.run_stochastic(
    "gbm", s0=100, n_paths=10_000_000, n_steps=252, dt=1/252,
    params={"mu": 0.05, "sigma": 0.2}, seed=42, device="cuda",
)
print(result["final_price"]["mean"])  # ~105.12

# Custom GARCH(1,1) Jump Diffusion
model = mbt.StochasticModel(
    drift="mu",
    diffusion="sqrt(h)",
    jump_intensity="lambda",
    jump_size="normal(mu_j, sigma_j)",
    state_vars={"h": 1e-4},
    state_update={"h": "omega + alpha * (ret - mu) ** 2 + beta * h"},
    params={"mu": 0.08, "omega": 1e-6, "alpha": 0.1, "beta": 0.85,
            "lambda": 5.0, "mu_j": -0.02, "sigma_j": 0.04},
)
result = mbt.run_stochastic(model, s0=100, n_paths=10_000_000, device="cuda")

# Custom mean-reverting model
mean_rev = mbt.StochasticModel(
    drift="kappa * (log(100.0) - log(S))",
    diffusion="sigma",
    params={"kappa": 2.0, "sigma": 0.25},
)
result = mbt.run_stochastic(mean_rev, s0=80, n_paths=1000, store_paths=True)
mbt.plot.stochastic_paths(result, show=True)

Configuration

BacktestConfig

Field                   Type                                   Description
universe                List[int|str] | Dict[str, List[str]]   Symbol IDs, ticker names, or dict mapping provider to symbols (e.g. {"binance": ["BTC-USDT:perp"]}). Dict format auto-resolves symbol_names
time_range_start        int (ns)                               Start timestamp. Use time_range("2021-01-01", "2026-01-01")
time_range_end          int (ns)                               End timestamp
bar_interval            dict                                   Bar resolution. Use Interval.minutes(1), Interval.hours(12), etc.
initial_capital         float                                  Starting capital (default 1000.0)
warmup_bars             int                                    Bars to skip before trading (let indicators stabilize)
accuracy                bool                                   When True, simulation runs on 1m bars (hybrid mode)
output_resolution       dict                                   Downsample output timeseries. Pro: sub-daily
trading_days_per_year   float                                  Annualization factor (365.25 crypto, 252 equities)
symbol_names            dict                                   Name-to-ID mapping for cross-asset references
currency                str                                    Quote currency for the portfolio (default "USD")
execution               ExecutionConfig                        Execution settings (signal delay, fill model, sizing mode, etc.)
fees                    FeeConfig                              Fee model. Use FeeConfig.binance_perps() or construct manually
slippage                Any                                    Slippage model. Use Slippage.fixed_bps(), .volume_impact(), etc.
exo_data                List[str]                              Exogenous data series names to inject (e.g. ["hashrate", "fear_greed"]). See Exogenous Data
extra_timeframes        Dict[str, dict]                        Additional timeframes for multi-TF strategies (e.g. {"4h": Interval.hours(4)})
rng_seed                Optional[int]                          Random seed for reproducible Monte Carlo and stochastic fills
resample_to             Optional[dict]                         Resample loaded bars before simulation (e.g. Interval.hours(4))

Interval Helpers

python
Interval.seconds(1)    # {"Seconds": 1}
Interval.minutes(1)    # {"Minutes": 1}
Interval.hours(12)     # {"Hours": 12}
Interval.days(1)       # {"Days": 1}

FeeConfig

FeeConfig(
    maker_fee_bps: float = 0.0,
    taker_fee_bps: float = 0.0,
    funding_rate_column: Optional[str] = None,
    borrow_rate_annual_bps: float = 0.0,
    min_fee: float = 0.0,
    default_fill_type: str = "Taker",
)

Field                    Type         Default   Description
maker_fee_bps            float        0.0       Maker fee in basis points
taker_fee_bps            float        0.0       Taker fee in basis points
funding_rate_column      str | None   None      Column name for funding rate (perps only)
borrow_rate_annual_bps   float        0.0       Annual borrow rate in bps (for shorts)
min_fee                  float        0.0       Minimum fee per trade
default_fill_type        str          "Taker"   "Maker" or "Taker" (conservative default)

FeeConfig Presets

Preset                      Maker    Taker    Funding
FeeConfig.binance_perps()   2 bps    5 bps    funding_rate column
FeeConfig.binance_spot()    10 bps   10 bps   None
FeeConfig.zero()            0        0        None
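
When no preset fits, construct the fee model directly; the numbers below are illustrative, not a real venue's schedule:

python
fees = mbt.FeeConfig(
    maker_fee_bps=1.0,         # 0.01% maker
    taker_fee_bps=4.0,         # 0.04% taker
    min_fee=0.10,              # $0.10 minimum per trade
    default_fill_type="Taker",
)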

Slippage Models

Slippage models simulate the cost of market impact when executing trades. The engine applies slippage after the fill price is determined, adjusting the effective price against the trader.

FixedBps -- Constant Basis Points

Slippage.fixed_bps(bps: float) -> dict

Applies a fixed cost in basis points (1 bps = 0.01%) on every fill. The effective fill price shifts by bps / 10000 * price against the trade direction (up for buys, down for sells).

Argument   Type    Description
bps        float   Slippage in basis points per trade. 1.0 = 0.01%, 5.0 = 0.05%

Formula: effective_price = fill_price * (1 + bps/10000) for buys, * (1 - bps/10000) for sells.

Use case: Quick research on liquid markets (crypto majors, large-cap equities). Simple and predictable.

VolumeImpact -- Market Impact Model

Slippage.volume_impact(impact_coeff: float, exponent: float = 1.5) -> dict

Models market impact as a function of participation rate (order size relative to bar volume). Larger orders relative to available liquidity incur more slippage. Based on the square-root market impact model.

Argument       Type    Default   Description
impact_coeff   float   --        Impact coefficient. Higher = more slippage. Typical: 0.05-0.3
exponent       float   1.5       Power law exponent. 0.5 = square-root (Almgren), 1.0 = linear, 1.5 = super-linear

Formula: slippage = impact_coeff * (order_qty / bar_volume) ^ exponent

Use case: Capacity analysis, illiquid assets, large position sizes. Essential for strategies trading more than 1-5% of bar volume.

SpreadBased -- Bid-Ask Spread

Slippage.spread_based(spread_fraction: float = 1.0) -> dict

Uses the actual bid-ask spread from bar data; slippage is a fraction of the half-spread. Requires bid and ask columns in the data.

Argument          Type    Default   Description
spread_fraction   float   1.0       Fraction of half-spread to apply. 1.0 = full half-spread, 0.5 = half

Formula: slippage = spread_fraction * (ask - bid) / 2

Use case: When bid/ask spread data is available. Most realistic for market orders on venues with order book data.

No Slippage

Slippage.none() -> dict

Zero slippage. Use only for debugging or when slippage is already embedded in the fill price model.

python
# Liquid crypto (BTC, ETH) -- 1-3 bps
Slippage.fixed_bps(2.0)

# Mid-cap altcoins -- volume impact matters
Slippage.volume_impact(0.1, exponent=0.5)

# With order book data -- most realistic
Slippage.spread_based(spread_fraction=1.0)

# Development only
Slippage.none()

ExecutionConfig

ExecutionConfig(
    signal_delay: int = 0,
    execution_price: str = "AtClose",
    max_position_pct: float = 1.0,
    allow_short: bool = True,
    allow_fractional: bool = True,
    skip_gap_bars: bool = False,
    position_sizing_mode: str = "FractionOfEquity",
    pyramiding: bool = False,
    fill_model: Optional[dict] = None,
    orders: Optional[OrderConfig] = None,
)

Field                  Type                 Default              Description
signal_delay           int                  0                    Bars to delay signal execution. Use 1 to avoid look-ahead
execution_price        str                  "AtClose"            Fill price: AtClose, AtOpen, AtVwap, NextBarOpen, NextBarClose, NextBarVwap, MidPrice
max_position_pct       float                1.0                  Maximum position as fraction of equity
allow_short            bool                 True                 Allow negative positions
allow_fractional       bool                 True                 Allow fractional units
skip_gap_bars          bool                 False                Skip bars with gaps in data
position_sizing_mode   str                  "FractionOfEquity"   FractionOfEquity, FractionOfInitialCapital, or Units
pyramiding             bool                 False                When True, the signal is a delta added to the current position each bar
fill_model             dict | None          None                 Fill model config. Use FillModel.participation(0.1)
orders                 OrderConfig | None   None                 Order management: SL, TP, trailing stops

ExecutionPrice Constants

Constant | Value | Description
ExecutionPrice.AT_CLOSE | "AtClose" | Fill at the current bar's close
ExecutionPrice.AT_OPEN | "AtOpen" | Fill at the current bar's open
ExecutionPrice.AT_VWAP | "AtVwap" | Fill at the current bar's VWAP
ExecutionPrice.NEXT_BAR_OPEN | "NextBarOpen" | Fill at the next bar's open
ExecutionPrice.NEXT_BAR_CLOSE | "NextBarClose" | Fill at the next bar's close
ExecutionPrice.NEXT_BAR_VWAP | "NextBarVwap" | Fill at the next bar's VWAP
ExecutionPrice.MID_PRICE | "MidPrice" | Fill at the mid-price
ExecutionPrice.custom(col) | {"Custom": col} | Fill at a named column

FillModel Helpers

FillModel.atomic()                          # entire order at single price (default)
FillModel.participation(rate=0.1)           # fill max 10% of bar volume
FillModel.participation(0.1, "TypicalPrice") # intra-bar price: SinglePoint, TypicalPrice, OhlcAverage
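
To make the participation model concrete, here is a rough sketch of its mechanics in plain Python (not the engine's implementation; `participation_fills` is a hypothetical name, and the real carry-over behavior may differ):

```python
def participation_fills(order_qty: float, bar_volumes: list[float], rate: float = 0.1) -> list[float]:
    """Fill at most `rate` of each bar's volume; unfilled quantity carries forward."""
    fills, remaining = [], order_qty
    for vol in bar_volumes:
        fill = min(remaining, rate * vol)  # cap this bar's fill at rate * volume
        fills.append(fill)
        remaining -= fill
        if remaining <= 0:
            break
    return fills

# 25 units against bars trading 100 volume each, at 10% participation:
fills = participation_fills(25.0, [100.0, 100.0, 100.0], rate=0.1)  # [10.0, 10.0, 5.0]
```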

Orders

Attach stop-loss, take-profit, and trailing stop orders via the fluent builder:

python
strategy = (
    mbt.Strategy.create("with_stops")
    .signal("z", zscore)
    .size(signal)
    .stop_loss(pct=3.0)         # exit if 3% below entry
    .take_profit(pct=5.0)       # exit if 5% above entry
    .trailing_stop(pct=2.0)     # trail 2% from peak
)

Alternatively, use OrderConfig directly on the execution config:

python
from manifoldbt.config import OrderConfig

execution = mbt.ExecutionConfig(
    orders=OrderConfig.bracket(stop_pct=3.0, profit_pct=5.0),
)

Dataset Auto-Resolution

The engine automatically selects the optimal dataset based on bar_interval:

bar_interval | Dataset | Notes
<= 1m | bars_1m or provider/1m/ | Finest resolution
> 1m, <= 1h | bars_1h or provider/1h/ | Resampled from 1h bars
> 1h | bars_1h or provider/1h/ | Resampled from the coarsest available

The engine tries the provider layout first (binance/1h/BTCUSDT.arrow), then falls back to the legacy layout (bars_1h/201.arrow). Non-exact intervals (e.g. 4h, 1d) are resampled from the finest available resolution.
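
"Resampled" here means standard OHLCV aggregation: first open, highest high, lowest low, last close, summed volume. A plain-Python sketch of the rule (illustrative only, not the engine's code):

```python
def resample_ohlcv(bars, group_size):
    """Aggregate each run of `group_size` consecutive bars into one coarser bar."""
    out = []
    for i in range(0, len(bars), group_size):
        chunk = bars[i:i + group_size]
        out.append({
            "open": chunk[0]["open"],                   # first open
            "high": max(b["high"] for b in chunk),      # highest high
            "low": min(b["low"] for b in chunk),        # lowest low
            "close": chunk[-1]["close"],                # last close
            "volume": sum(b["volume"] for b in chunk),  # total volume
        })
    return out

# Four 1h bars -> one 4h bar: open 10, high 14, low 8, close 10, volume 22
h1 = [
    {"open": 10, "high": 12, "low": 9,  "close": 11, "volume": 5},
    {"open": 11, "high": 14, "low": 10, "close": 13, "volume": 7},
    {"open": 13, "high": 13, "low": 8,  "close": 9,  "volume": 4},
    {"open": 9,  "high": 10, "low": 9,  "close": 10, "volume": 6},
]
bar_4h = resample_ohlcv(h1, 4)[0]
```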

Accuracy Mode

When accuracy=True, the engine always loads bars_1m. Signals are evaluated at bar_interval resolution, but the simulation loop runs on 1-minute bars for precise intrabar SL/TP fills and drawdown tracking.

python
config = mbt.BacktestConfig(
    bar_interval=Interval.hours(12),
    accuracy=True,  # hybrid: signals on 12h, sim on 1m
    ...
)

Exogenous Data

Inject external data series (hashrate, funding rates, sentiment, on-chain metrics) into your strategy expressions. The engine ASOF-joins exogenous data onto bar timestamps at the requested resolution.
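
The join semantics are worth spelling out: each bar timestamp receives the most recent exogenous value at or before it. A minimal sketch of those semantics (plain Python over sorted integer timestamps; the engine does this in Rust):

```python
def asof_join(bar_ts, exo_ts, exo_vals):
    """For each bar timestamp, take the last exo value with exo_ts <= bar_ts.
    Bars before the first exo point get None. Both timestamp lists must be sorted."""
    out, j = [], -1
    for t in bar_ts:
        while j + 1 < len(exo_ts) and exo_ts[j + 1] <= t:
            j += 1
        out.append(exo_vals[j] if j >= 0 else None)
    return out

# Daily exo values joined onto 12h bar timestamps (timestamps as ints for brevity)
joined = asof_join([0, 12, 24, 36], [0, 24], [1.5, 2.5])  # [1.5, 1.5, 2.5, 2.5]
```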

Register

Use mbt.register_exo() to write a DataFrame to the local store. Requires a timestamp column and one or more value columns.

python
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2020-01-01", "2026-01-01", freq="D", tz="UTC"),
    "hashrate": hashrate_values,  # your data
})

mbt.register_exo("hashrate", df, store=store)
# Writes to: data/mega/exo/hashrate.arrow

Declare in Config

List the exo series names in exo_data:

python
config = mbt.BacktestConfig(
    exo_data=["hashrate", "fear_greed"],
    ...
)

Use in Expressions

Access exo columns with the exo() helper or directly via col():

python
from manifoldbt.expr import exo, col
from manifoldbt.indicators import ema, close

# exo() helper (shorthand)
hr = exo("hashrate")
hr_smooth = ema(hr, 30)

# Equivalent col() syntax
hr = col("exo.hashrate.hashrate")

# Multi-column exo series
active_addr = exo("onchain", "active_addresses")

# Use in strategy: spread between price and hashrate
price_ratio = close / ema(close, 30)
hr_ratio = hr / ema(hr, 30)
spread = price_ratio - hr_ratio
spread_z = spread.zscore(90)
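
For reference, .zscore(window) computes a rolling z-score; its mechanics in plain Python (a sketch, not the engine's vectorized implementation; NaN vs None handling may differ):

```python
import statistics

def rolling_zscore(values, window):
    """(x - rolling mean) / rolling stdev over the trailing `window` values.
    Returns None until the window fills."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
            continue
        win = values[i + 1 - window:i + 1]
        mu, sd = statistics.fmean(win), statistics.pstdev(win)
        out.append((values[i] - mu) / sd if sd > 0 else 0.0)
    return out

z = rolling_zscore([1.0, 2.0, 3.0, 4.0, 10.0], window=3)  # first two entries are None
```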

register_exo() Reference

Argument | Type | Default | Description
name | str | -- | Series identifier (e.g. "hashrate")
data | DataFrame or dict | -- | Must have a timestamp column. Value columns are cast to Float64
store | DataStore | None | Target store. If None, uses the default data root
provider | str | None | Provider for the unified layout (e.g. "binance"). Omit for global exo
timeframe | str | "1d" | Timeframe label for the unified layout

Cross-Asset References

Access columns from other symbols using mbt.symbol_ref() or the .of_symbol() method. This enables stat-arb, relative-value, and basket strategies.

python
# Method 1: symbol_ref()
btc_close = mbt.symbol_ref("BTCUSDT", "close")

# Method 2: .of_symbol() on any column
btc_close = mbt.col("close").of_symbol("BTCUSDT")

# Method 3: asset() helper
btc = mbt.asset("BTCUSDT")
btc_close = btc.col("close")

# Use in strategy
spread = mbt.col("close") - btc_close
strategy = (
    mbt.Strategy.create("arb")
    .signal("btc_close", btc_close)
    .signal("spread", spread)
    .size(mbt.when(spread.zscore(60) < -2.0, 1.0, 0.0))
)

Cross-Exchange Backtesting Pro

Compute signals from one exchange's data and execute trades at another exchange's prices. This enables spread arbitrage, cross-venue basis trades, or simply using a venue with better data quality for signal generation while trading on another.

Universe Dict Format

The simplest way to set up cross-exchange strategies is with the dict universe format. Each key is a provider name, each value is a list of normalized symbols:

python
config = mbt.BacktestConfig(
    universe={
        "dydx":    ["BTC-USD:perp"],      # execution (fills here)
        "binance": ["BTC-USDT:perp"],     # signal source (via symbol_ref)
    },
    bar_interval=Interval.hours(6),
    initial_capital=10_000,
    warmup_bars=30,
    execution=mbt.ExecutionConfig(signal_delay=1),
    fees=mbt.FeeConfig(maker_fee_bps=1.0, taker_fee_bps=2.5),
    slippage=Slippage.fixed_bps(2),
    ...
)

The engine automatically resolves each provider:symbol pair to its symbol ID and populates symbol_names. The first provider's symbols are the execution targets (fills happen at their prices).

Full Example: RSI Signal from Binance, Execution on dYdX

python
import manifoldbt as mbt
from manifoldbt.indicators import rsi, ema
from manifoldbt.expr import col, symbol_ref, lit, when
from manifoldbt.helpers import time_range, Interval, Slippage

# Signal: RSI + EMA from Binance BTC
bn_btc_close = symbol_ref("binance:BTC-USDT:perp", "close")
bn_btc_rsi   = rsi(bn_btc_close, 14)
bn_ema_fast  = ema(bn_btc_close, 15)
bn_ema_slow  = ema(bn_btc_close, 30)
trend_up     = bn_ema_fast > bn_ema_slow

# Size uses named signals (col references, not inline SymbolRef)
signal = when(
    (col("trend") > lit(0.5)) & (col("bn_rsi") > lit(70.0)), 1.0,
    when((col("trend") < lit(0.5)) & (col("bn_rsi") < lit(30.0)), -1.0,
    0.0),
)

# Strategy: register signals, then use them in size
strategy = (
    mbt.Strategy.create("cross_exchange_rsi")
    .signal("bn_rsi", bn_btc_rsi)
    .signal("trend", when(trend_up, 1.0, 0.0))
    .size(signal)
)

# Config: dict universe handles everything
start, end = time_range("2024-02-01", "2025-03-01")
config = mbt.BacktestConfig(
    universe={
        "dydx":    ["BTC-USD:perp"],
        "binance": ["BTC-USDT:perp"],
    },
    time_range_start=start, time_range_end=end,
    bar_interval=Interval.hours(6),
    initial_capital=10_000,
    warmup_bars=30,
    execution=mbt.ExecutionConfig(signal_delay=1),
    fees=mbt.FeeConfig(maker_fee_bps=1.0, taker_fee_bps=2.5),
    slippage=Slippage.fixed_bps(2),
)

result = mbt.run(strategy, config, store)

Multi-Timeframe Strategies

Combine signals from different timeframes in a single strategy. Use a slow trend filter on higher-timeframe bars while entering positions on faster bars, all in one vectorized backtest.

How It Works

The engine resamples native bars to each extra timeframe, then forward-fills the completed values back onto the native grid. A completed 1h bar becomes available at the start of the next 1h period, so there is no look-ahead bias.

python
import manifoldbt as mbt
from manifoldbt.indicators import ema, rsi, close
from manifoldbt.helpers import Interval

# 1. Reference higher timeframes
h4 = mbt.tf("4h")
d1 = mbt.tf("1d")

# 2. Build indicators on any timeframe
trend   = ema(d1.close, 20) > ema(d1.close, 50)   # daily trend
dip     = rsi(h4.close, 14) < 30.0              # 4h oversold
trigger = rsi(close, 14) < 25.0                 # native entry

# 3. Combine in strategy
strategy = (
    mbt.Strategy.create("multi_tf")
    .signal("trend", trend)
    .signal("dip", dip)
    .signal("trigger", trigger)
    .size(mbt.when(
        mbt.col("trend") & mbt.col("dip") & mbt.col("trigger"),
        0.5, 0.0
    ))
    .stop_loss(pct=3.0)
)
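
The no-lookahead rule above ("a completed bar becomes available at the start of the next period") can be sketched as a delayed forward fill, here in plain Python with integer hour timestamps (illustrative only, not the engine's code):

```python
def project_htf(native_ts, htf_period, htf_values):
    """htf_values[k] is the close of the k-th completed higher-timeframe bar;
    it becomes visible only at native timestamps >= (k + 1) * htf_period."""
    out = []
    for t in native_ts:
        k = min(t // htf_period - 1, len(htf_values) - 1)  # last fully closed HTF bar
        out.append(htf_values[k] if k >= 0 else None)
    return out

# 1h native bars against a 4h series: the first 4h close is visible from t=4 onward
visible = project_htf([0, 1, 2, 3, 4, 5, 8], 4, [100.0, 101.0])
# [None, None, None, None, 100.0, 100.0, 101.0]
```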

Config

Declare the extra timeframes in BacktestConfig. The label (e.g. "4h") must match what you pass to mbt.tf().

python
config = mbt.BacktestConfig(
    bar_interval=Interval.minutes(15),   # native resolution
    extra_timeframes={
        "4h": Interval.hours(4),
        "1d": Interval.days(1),
    },
    warmup_bars=60,                       # account for slower TF indicators
    # ... other config
)

tf() Reference

mbt.tf(label) returns a TimeframeRef with shortcuts for OHLCV columns:

Property | Equivalent
tf("4h").open | col("4h.open")
tf("4h").high | col("4h.high")
tf("4h").low | col("4h.low")
tf("4h").close | col("4h.close")
tf("4h").volume | col("4h.volume")
tf("4h").col("x") | col("4h.x")

Diagnostics Pro

The manifoldbt.diagnostics module provides automated checks to catch common strategy bugs before you trust your results.

detect_lookahead()

detect_lookahead(
    strategy: Strategy,
    config: BacktestConfig,
    store: DataStore,
    *,
    mode: str = "all",
    tolerance: float = 1e-9,
) -> DiagnosticsResult

Detect look-ahead bias -- both global and rolling. Automatically splits the config's time range and compares trades from shorter runs against the full run. Data is loaded once and sliced for each sub-test.

Argument | Type | Default | Description
strategy | Strategy | -- | Strategy definition
config | BacktestConfig | -- | Backtest configuration
store | DataStore | -- | Data store
mode | str | "all" | "all", "extension", or "truncation"
tolerance | float | 1e-9 | Float comparison tolerance for quantity/price/fees

Returns DiagnosticsResult with .passed (bool), .assert_clean() (raises on failure), and .reports (list of sub-test results).

python
from manifoldbt.diagnostics import detect_lookahead

report = detect_lookahead(strategy, config, store)
print(report)          # PASS or FAIL with details
report.assert_clean()   # raises AssertionError if bias detected

check_exposure_stability()

check_exposure_stability(
    strategy: Strategy,
    config: BacktestConfig,
    store: DataStore,
    *,
    mode: str = "all",
    tolerance: float = 1e-6,
) -> ExposureDiagnosticsResult

Check that exposure/utilization is identical across time windows. Runs the strategy on different sub-periods and compares utilization, exposure, and per-symbol positions at overlapping timestamps.

Argument | Type | Default | Description
strategy | Strategy | -- | Strategy definition
config | BacktestConfig | -- | Backtest configuration
store | DataStore | -- | Data store
mode | str | "all" | "all", "extension", or "truncation"
tolerance | float | 1e-6 | Absolute tolerance for float comparisons

Returns ExposureDiagnosticsResult with .passed, .assert_clean().

python
from manifoldbt.diagnostics import check_exposure_stability

report = check_exposure_stability(strategy, config, store)
print(report)
report.assert_clean()

risk_check()

risk_check(
    result: Result,
    *,
    max_utilization: float = 0.95,
    min_free_margin: float = 0.05,
    max_exposure_ratio: float = 3.0,
    max_utilization_trend: float = 1e-4,
    max_concentration: float = 0.95,
) -> RiskReport

Systematic risk checks on a backtest result. Analyzes free margin, utilization, leverage, and concentration over the full period.

Argument | Type | Default | Description
result | Result | -- | Backtest result from mbt.run()
max_utilization | float | 0.95 | Fail if peak utilization exceeds this
min_free_margin | float | 0.05 | Fail if the free margin ratio drops below this
max_exposure_ratio | float | 3.0 | Fail if exposure / initial_capital exceeds this
max_utilization_trend | float | 1e-4 | Warn if the utilization slope per bar exceeds this
max_concentration | float | 0.95 | Warn if the peak Herfindahl index exceeds this

Returns RiskReport with .passed, .clean, .assert_clean(), .checks (list), and time-series arrays: .timestamps, .utilization, .free_margin_ratio, .concentration.

python
from manifoldbt.diagnostics import risk_check

report = risk_check(result, max_utilization=0.95, min_free_margin=0.05)
print(report)
report.assert_clean()
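
The concentration check thresholds a Herfindahl index over per-symbol exposure weights; the quantity looks roughly like this (a sketch of the standard index, with `herfindahl` a hypothetical name; the engine's exact normalization may differ):

```python
def herfindahl(exposures):
    """Sum of squared absolute-exposure weights. 1.0 = fully concentrated,
    1/N = equal weight across N symbols."""
    total = sum(abs(e) for e in exposures)
    if total == 0:
        return 0.0
    return sum((abs(e) / total) ** 2 for e in exposures)

h_single = herfindahl([100.0, 0.0, 0.0])        # 1.0  -- everything in one symbol
h_equal = herfindahl([50.0, 50.0, 50.0, 50.0])  # 0.25 -- equal weight across four
```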

Profiling

Every result includes a profile dict with microsecond-resolution timing for each execution phase:

Key | Description
data_load_us | Time to load and deserialize bar data from disk
align_us | Time to align multi-symbol timestamps
signal_eval_us | Time to evaluate the expression graph
runtime_prep_us | Time to prepare the simulation runtime
simulation_us | Time to run the core simulation loop
output_build_us | Time to build Arrow output arrays
total_us | Total wall-clock time

python
print(result.profile_summary())
# Profile (total: 142.3ms)
# --------------------------------------------
#   Data loading     48.2ms   33.9%  #############
#   Signal eval      12.1ms    8.5%  ###
#   Simulation       78.4ms   55.1%  ######################

Result

The Result class wraps the Rust BacktestResult with DataFrame conversion, summaries, plotting shortcuts, and Jupyter rich display.

DataFrame Methods

result.equity_df(backend: str = "auto") -> DataFrame

Equity curve as a DataFrame with timestamp and equity columns.

Argument | Type | Default | Description
backend | str | "auto" | "pandas", "polars", or "auto" (detects what is installed)

result.trades_df(backend: str = "auto") -> DataFrame

Trades table with all trade fields (timestamps, side, quantity, fill_price, fees, etc.).

result.positions_df(backend: str = "auto") -> DataFrame

Position trace with per-bar position, equity, capital, close price, symbol_id.

result.daily_returns_series(backend: str = "auto") -> Series

Daily returns as a Series.

Summary & Profiling

result.summary() -> str

Pretty-printed performance summary with returns, ratios, and trade statistics.

result.profile_summary() -> str

Timing breakdown of each execution phase (data load, signal eval, simulation, etc.).

Plotting

result.plot(kind: str = "tearsheet", **kwargs) -> Any

Plot backtest results. Available kinds: "tearsheet", "equity", "drawdown", "monthly_returns", "summary", "annual_returns", "rolling_sharpe", "rolling_volatility", "returns_histogram".

result.plot_equity(**kwargs) -> Any
result.plot_drawdown(**kwargs) -> Any
result.plot_monthly_returns(**kwargs) -> Any

Shortcut methods for common chart types.

Comparison

result.compare(*others: Result, backend: str = "auto") -> DataFrame

Compare metrics across multiple results. Returns a DataFrame with one row per result and all metrics as columns.

python
result = mbt.run(strategy, config, store)

# Summary
print(result.summary())
print(result.profile_summary())

# DataFrames
eq_df = result.equity_df()
trades = result.trades_df()
positions = result.positions_df()

# Plot
result.plot("tearsheet")
result.plot_equity(show=True)

# Compare multiple strategies
r1 = mbt.run(strat_a, config, store)
r2 = mbt.run(strat_b, config, store)
df = r1.compare(r2)

# Raw attributes (delegated to Rust)
result.metrics        # dict of performance metrics
result.trade_count    # number of trades
result.equity_curve   # raw equity array
result.trades         # Arrow table of trades
result.positions      # Arrow table of positions
result.manifest       # run manifest (config + strategy)
result.profile        # timing dict

Strategy Builder

The Strategy class supports both direct construction and a fluent builder pattern.

Direct Construction

Strategy(
    name: str,
    signals: Optional[dict[str, Expr]] = None,
    position_sizing: Optional[Expr] = None,
    parameters: Optional[dict[str, Expr]] = None,
    constraints: Optional[list] = None,
    description: Optional[str] = None,
)

Fluent Builder

Method | Signature | Description
Strategy.create(name) | create(name: str) -> Strategy | Create an empty strategy for chaining
.signal(name, expr) | signal(name: str, expr: Expr) -> Strategy | Add a named signal expression
.size(expr) | size(expr: Expr) -> Strategy | Set the position sizing expression
.param(name, ...) | param(name, default, range, description) -> Strategy | Register a sweep parameter
.stop_loss(pct) | stop_loss(pct: float) -> Strategy | Attach a stop-loss order
.take_profit(pct) | take_profit(pct: float) -> Strategy | Attach a take-profit order
.trailing_stop(pct) | trailing_stop(pct: float, use_high=True) -> Strategy | Attach a trailing stop
.describe(text) | describe(text: str) -> Strategy | Set the strategy description

python
strategy = (
    mbt.Strategy.create("ema_crossover")
    .signal("fast", ema(close, 10))
    .signal("slow", ema(close, 25))
    .signal("trend", mbt.col("fast") > mbt.col("slow"))
    .size(mbt.when(mbt.col("trend"), 0.5, 0.0))
    .stop_loss(pct=2.0)
    .describe("Simple EMA crossover with stop-loss")
)

Helpers

Convenience functions from manifoldbt.helpers for configuration.

time_range(start: str, end: str) -> Tuple[int, int]

Convert two date strings to a (start_ns, end_ns) tuple. Accepts "YYYY-MM-DD" and "YYYY-MM-DD HH:MM:SS" formats.

date_to_ns(date_str: str) -> int

Convert a single date string to nanoseconds since Unix epoch (UTC).
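
This is plain UTC epoch arithmetic; an equivalent standard-library sketch (assuming dates are interpreted as UTC midnight, with `date_to_ns_sketch` a hypothetical stand-in for the library function):

```python
from datetime import datetime, timezone

def date_to_ns_sketch(date_str: str) -> int:
    """Nanoseconds since the Unix epoch for a 'YYYY-MM-DD' string (UTC midnight)."""
    dt = datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return int(dt.timestamp()) * 1_000_000_000

ns = date_to_ns_sketch("2022-01-01")  # 1640995200000000000
```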

python
from manifoldbt.helpers import time_range, Slippage, Interval

start, end = time_range("2022-01-01", "2024-01-01")

config = mbt.BacktestConfig(
    time_range_start=start,
    time_range_end=end,
    slippage=Slippage.fixed_bps(1.0),
    bar_interval=Interval.minutes(1),
)

Metrics

The result.metrics dict contains all computed performance metrics. Access via result.summary() for a formatted view or result.metrics["sharpe"] for programmatic use.

Performance Metrics

Metric | Description
total_return | Cumulative return over the period
cagr | Compound annual growth rate
volatility | Annualized standard deviation of returns
sharpe | Annualized Sharpe ratio (excess return / volatility)
sortino | Sortino ratio (downside deviation only)
calmar | CAGR / max drawdown
max_drawdown | Largest peak-to-trough decline
tstat_sharpe | T-statistic of the Sharpe ratio
alpha | Jensen's alpha (vs buy-and-hold)
beta | Beta to the underlying asset
tstat_alpha | T-statistic of alpha
skewness | Return distribution skewness
kurtosis | Return distribution excess kurtosis
best_day | Best single-day return
worst_day | Worst single-day return
pct_positive_days | Fraction of days with positive returns
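
For intuition, two of these metrics in plain Python (illustrative only; the engine's annualization factor, risk-free handling, and estimators may differ -- here 365 periods/year and a zero risk-free rate are assumed):

```python
import math
import statistics

def sharpe(daily_returns, periods_per_year=365):
    """Annualized mean/stdev of daily returns, zero risk-free rate assumed."""
    mu = statistics.fmean(daily_returns)
    sd = statistics.pstdev(daily_returns)
    return (mu / sd) * math.sqrt(periods_per_year) if sd > 0 else 0.0

def max_drawdown(equity):
    """Largest peak-to-trough decline as a fraction of the running peak."""
    peak, worst = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, (peak - x) / peak)
    return worst

dd = max_drawdown([100, 120, 90, 130, 110])  # 0.25 (the 120 -> 90 decline)
```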

Trade Statistics

Nested under result.metrics["trade_stats"]:

Metric | Description
total_trades | Number of round-trip trades
win_rate | Fraction of profitable trades
profit_factor | Gross profit / gross loss
expectancy | Average P&L per trade
avg_win | Average winning trade P&L
avg_loss | Average losing trade P&L
total_fees | Cumulative fees paid
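
These derive directly from per-trade P&L; a quick sketch with hypothetical numbers (not the engine's code):

```python
def trade_stats(pnls):
    """Basic trade statistics from a list of round-trip P&L values."""
    wins = [p for p in pnls if p > 0]
    losses = [p for p in pnls if p < 0]
    gross_profit = sum(wins)
    gross_loss = -sum(losses)  # positive number
    return {
        "total_trades": len(pnls),
        "win_rate": len(wins) / len(pnls),
        "profit_factor": gross_profit / gross_loss if gross_loss else float("inf"),
        "expectancy": sum(pnls) / len(pnls),
    }

stats = trade_stats([120.0, -60.0, 80.0, -40.0])
# win_rate 0.5, profit_factor 2.0, expectancy 25.0
```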

Visualization

The mbt.plot module provides publication-quality charts. Install with pip install manifoldbt[plot].

Composite Layouts Pro

tearsheet(
    result,
    *,
    benchmark=None,
    title: Optional[str] = None,
    show: bool = False,
    save: Optional[str | Path] = None,
    dpi: int = 150,
) -> str

Self-contained HTML tearsheet with all charts and metrics. Returns the HTML string. Opens in browser when show=True, writes to disk when save is given.

research_report(
    sweep_result: Optional[dict] = None,
    wf_result: Optional[dict] = None,
    stability_result: Optional[dict] = None,
    *,
    title: str = "Research Report",
    figsize: tuple = (14, 6),
    show: bool = False,
    save: Optional[str | Path] = None,
    dpi: int = 150,
) -> List[Figure]

Research report -- one matplotlib Figure per analysis (sweep, walk-forward, stability). At least one result required.

Backtest Result Charts

summary(result, *, figsize=(14, 8), show=False, save=None) -> Figure

Compact summary panel: TWR equity + vol-adjusted buy-and-hold benchmark, daily trade count, used margin %.

equity(result, *, ax=None, color="#5b7ff5", title="Equity Curve", figsize=(14, 5), show=False, save=None) -> Figure

Portfolio equity curve over time with filled area.

benchmark_equity(
    result, benchmark: ndarray,
    *, ax=None, strategy_color="#5b7ff5", benchmark_color="#3a3a40",
    normalize=True, labels=("Strategy", "Buy & Hold"),
    title="Strategy vs Benchmark", figsize=(14, 5), show=False, save=None,
) -> Figure

Strategy equity overlaid with a benchmark array, both normalized to 100.

drawdown(result, *, ax=None, color="#e85d75", title="Drawdown", figsize=(14, 3), show=False, save=None) -> Figure

Drawdown as a filled area chart (peak-to-trough decline).

monthly_returns(result, *, ax=None, annotate=True, title="Monthly Returns (%)", figsize=(12, 5), show=False, save=None) -> Figure

Heatmap of monthly returns (year rows x month columns + annual total).

annual_returns(result, *, ax=None, title="Annual Returns", figsize=(10, 4), show=False, save=None) -> Figure

Annual returns bar chart with green/red conditional coloring.

returns_histogram(result, *, ax=None, bins=100, title="Returns Distribution", figsize=(12, 5), show=False, save=None) -> Figure

Histogram of daily returns with normal fit overlay.

var_chart(result, *, ax=None, confidence=0.05, bins=120, title="Value at Risk", figsize=(12, 5), show=False, save=None) -> Figure

Returns histogram with VaR and CVaR lines at 5% and 1% levels.
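
The quantities plotted are the historical VaR (a low quantile of daily returns) and CVaR (the mean of the tail at or below it); a nearest-rank sketch (the chart's exact quantile interpolation may differ, and `var_cvar` is a hypothetical name):

```python
def var_cvar(returns, confidence=0.05):
    """Historical VaR at the `confidence` quantile (nearest-rank) and
    CVaR as the mean of returns at or below the VaR."""
    s = sorted(returns)
    idx = max(int(confidence * len(s)) - 1, 0)
    var = s[idx]
    tail = s[:idx + 1]
    return var, sum(tail) / len(tail)

# 20 daily returns; the 5% tail is just the single worst observation
rets = [-0.05, -0.03] + [0.001] * 18
var, cvar = var_cvar(rets, 0.05)  # var = -0.05, cvar = -0.05
```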

rolling_sharpe(result, *, windows=None, ax=None, title="Rolling Sharpe", trading_days_per_year=365.25, figsize=(14, 4), show=False, save=None) -> Figure

Rolling annualized Sharpe ratio. Default windows: [126, 252].

rolling_volatility(result, *, windows=None, ax=None, title="Rolling Volatility", trading_days_per_year=365.25, figsize=(14, 4), show=False, save=None) -> Figure

Rolling annualized volatility. Default windows: [126, 252].

All backtest chart functions accept: ax (optional matplotlib Axes to draw on), show (display immediately), save (write to file path). They all return a matplotlib Figure.

Research Charts

heatmap_2d(
    sweep_result: dict,
    *, ax=None, annotate=True, fmt=".3f", highlight_best=True,
    title=None, figsize=(10, 8), show=False, save=None,
) -> Figure

2D parameter sweep heatmap from a run_sweep_2d() result. Uses Gaussian smoothing to highlight the plateau-optimal region (overfit-resistant). Annotations are auto-disabled when the grid exceeds 100 cells.

Argument | Type | Default | Description
sweep_result | dict | -- | Output of run_sweep_2d()
annotate | bool | True | Show values in cells (if grid <= 100 cells)
fmt | str | ".3f" | Number format for annotations
highlight_best | bool | True | Highlight the plateau-optimal cell

surface_3d(
    sweep_result: dict,
    *, highlight_best=True, title=None, figsize=(12, 8),
    elev=30, azim=-45, show=False, save=None,
) -> Figure

3D surface plot from a 2D parameter sweep result. Same input format as heatmap_2d.

Argument | Type | Default | Description
sweep_result | dict | -- | Output of run_sweep_2d()
elev | float | 30 | Camera elevation angle
azim | float | -45 | Camera azimuth angle
highlight_best | bool | True | Mark the best point with a white dot

monte_carlo(
    result,
    *, n_simulations=1000, method="bootstrap",
    percentiles=None, n_sample_paths=50,
    ax=None, median_color="#5b7ff5", band_color="#5b7ff5",
    title=None, figsize=(12, 5), seed=None, show=False, save=None,
) -> Figure

Monte Carlo fan chart with percentile bands, sample paths, and risk stats.

Argument | Type | Default | Description
result | Result | -- | Backtest result from mbt.run()
n_simulations | int | 1000 | Number of simulated paths (capped at 1,000 for Community)
method | str | "bootstrap" | "bootstrap" (tail risk) or "permutation" (path-dependency)
percentiles | list[int] | [5, 25, 50, 75, 95] | Percentile levels for bands
n_sample_paths | int | 50 | Number of individual paths to draw (faded). 0 to disable
seed | Optional[int] | None | Random seed for reproducibility

walk_forward(
    wf_result: dict,
    *, mode="auto", full_result=None,
    ax=None, is_color="#5b7ff5", oos_color="#f5a623",
    title=None, figsize=(10, 5), show=False, save=None,
) -> Figure

Walk-forward analysis chart with multiple display modes.

Argument | Type | Default | Description
wf_result | dict | -- | Output of run_walk_forward()
mode | str | "auto" | "auto", "equity", "bars", or "stitched"
full_result | Optional[Result] | None | Full backtest result for the "stitched" mode baseline

stability(
    stability_result: dict,
    *, ax=None, line_color="#5b7ff5", band_color="#5b7ff5",
    band_alpha=0.15, title=None, figsize=(10, 5), show=False, save=None,
) -> Figure

Parameter stability chart with mean +/- std shaded bands. Shows stability score.

stochastic_paths(
    result: dict,
    *, percentiles=None, n_sample_paths=50,
    ax=None, median_color="#5b7ff5", band_color="#5b7ff5",
    title=None, figsize=(12, 5), show=False, save=None,
) -> Figure

Fan chart for stochastic simulation paths with percentile bands and risk stats. Requires store_paths=True in run_stochastic().

Argument | Type | Default | Description
result | dict | -- | Output of run_stochastic(..., store_paths=True)
percentiles | list[int] | [5, 25, 50, 75, 95] | Percentile levels for bands
n_sample_paths | int | 50 | Number of individual paths to draw (faded). 0 to disable

correlation_matrix(
    symbols: list[str], matrix: list[list[float]],
    *, ax=None, annotate=True, title="Correlation Matrix",
    figsize=(8, 7), show=False, save=None,
) -> Figure

Symbol correlation matrix heatmap.

chart(
    result, store: DataStore, symbol_id: int,
    *, emas=None, smas=None, n_bars=120,
    interactive=True, figsize=(14, 7), show=False, save=None,
)

Candlestick chart with trade markers and indicator overlays. Uses Plotly when interactive=True, matplotlib otherwise.

Argument | Type | Default | Description
result | Result | -- | Backtest result
store | DataStore | -- | Data store (to load OHLC bars)
symbol_id | int | -- | Which symbol to chart
emas | Optional[list[int]] | None | EMA periods to overlay (e.g. [10, 25])
smas | Optional[list[int]] | None | SMA periods to overlay
n_bars | int | 120 | Number of bars to display (last N)
interactive | bool | True | Plotly (True) or matplotlib (False)

python
# Full tearsheet (HTML)
mbt.plot.tearsheet(result, show=True)

# Individual charts
mbt.plot.equity(result, show=True)
mbt.plot.drawdown(result, show=True)
mbt.plot.monthly_returns(result, show=True)

# Save to file
mbt.plot.tearsheet(result, save="report.html")
mbt.plot.equity(result, save="equity.png")

# Monte Carlo with custom params
mbt.plot.monte_carlo(result, n_simulations=5000, method="bootstrap", show=True)

# Via Result methods
result.plot("tearsheet")
result.plot_equity()
result.plot_drawdown()

DataStore

The DataStore connects the engine to your local bar data and metadata database.

DataStore(
    data_root: str,
    metadata_db: str = "metadata/metadata.sqlite",
    dataset: str = "bars_1m",
)
Argument | Type | Default | Description
data_root | str | -- | Root directory containing bar data (Arrow IPC files)
metadata_db | str | "metadata/metadata.sqlite" | Path to the SQLite metadata database (relative to data_root)
dataset | str | "bars_1m" | Dataset table name. Auto-resolved from bar_interval in most cases

python
store = mbt.DataStore(
    data_root="data",
    metadata_db="metadata/metadata.sqlite",
)

# Or with explicit dataset override
store = mbt.DataStore(
    data_root="data",
    dataset="bars_15m",
)

Portfolio

The Portfolio builder combines multiple strategies with allocation weights and risk controls. Run with mbt.run_portfolio().

Portfolio Builder

Method | Signature | Description
.strategy(s, w) | strategy(strategy: Strategy, weight: float) -> Portfolio | Add a strategy with an allocation weight (weights are normalized)
.max_drawdown(pct) | max_drawdown(pct: float) -> Portfolio | Halt trading if portfolio drawdown exceeds this percentage
.max_gross_exposure(pct) | max_gross_exposure(pct: float) -> Portfolio | Cap total gross exposure as a fraction of equity
.rebalance_periodic(n) | rebalance_periodic(every_n_bars: int) -> Portfolio | Rebalance allocations every N bars
.rebalance_threshold(pct) | rebalance_threshold(drift_pct: float) -> Portfolio | Rebalance when any weight drifts more than this percentage
.no_rebalance() | no_rebalance() -> Portfolio | Disable rebalancing (buy-and-hold weights)

run_portfolio()

run_portfolio(
    portfolio: Portfolio,
    config: BacktestConfig,
    store: DataStore,
) -> Result

Run a portfolio backtest. Returns a single Result with the combined equity curve, trades, and metrics.

python
# Build a portfolio of two strategies
portfolio = (
    mbt.Portfolio()
    .strategy(momentum_strat, weight=0.6)
    .strategy(mean_rev_strat, weight=0.4)
    .max_drawdown(15.0)
    .max_gross_exposure(1.5)
    .rebalance_periodic(every_n_bars=1440)  # daily at 1m bars
)

result = mbt.run_portfolio(portfolio, config, store)
print(result.summary())
mbt.plot.tearsheet(result, show=True)

Best Practices

01. Use signal_delay=1. Delay 0 introduces look-ahead bias. Only use 0 when you explicitly account for it in your data pipeline.

02. Set warmup_bars. Set it to at least the longest indicator window (e.g. 60 for zscore(60)) to avoid NaN-dominated early signals.

03. Use mbt.when() for sizing. It auto-coerces numbers and supports NaN (hold) as the default false branch. Cleaner than manual if/else arithmetic.

04. Run diagnostics before trusting results. Call detect_lookahead() and check_exposure_stability() on every new strategy, and risk_check() on the result.

05. Start with hours(12) for fast iteration. Larger bar intervals load less data and simulate faster. Switch to minutes(1) only for final validation.

06. Use accuracy=True for final validation. When your strategy uses stop-loss or take-profit orders, accuracy mode runs the simulation on 1-minute bars for precise fill detection.

07. Use run_sweep_lite for large grids. It skips trade logging and Arrow output construction; for 100k+ parameter combos it is an order of magnitude faster than run_sweep.

08. Be conservative with fees. Use taker fees by default; maker fees assume passive limit orders. Use FeeConfig.binance_perps() as a baseline.

Pro Activation

manifoldbt works out of the box as the Community edition. Pro unlocks additional features:

Feature | Community | Pro
Output resolution | Daily | Sub-daily (1m, 5m, 15m, 1h)
Monte Carlo simulations | 1,000 | Unlimited
Walk-forward analysis | -- | Anchored & Rolling
Parameter stability | -- | Full
Crypto connectors (Binance, Hyperliquid) | Yes | Yes
Databento & Massive connectors | -- | Yes
Safety checks (lookahead, exposure) | -- | Yes
Tearsheet export | -- | HTML & PDF

Activate

After purchasing a license at manifoldbt.com, activate with your key. The recommended way is the CLI, which keeps the key out of your source code:

bash
manifoldbt activate "your-license-key-here"

You can also activate from Python:

python
import manifoldbt as mbt

mbt.activate("your-license-key-here")
print(mbt.license_info())
# ("Pro", "you@email.com")

The license key is saved locally. You only need to activate once per machine; subsequent imports load the stored key automatically.

License file

The key is stored at:

OS | Path
Windows | %LOCALAPPDATA%\manifoldbt\license\license.key
macOS | ~/Library/Application Support/manifoldbt/license/license.key
Linux | ~/.local/share/manifoldbt/license/license.key

To deactivate, delete the file:

bash
# macOS / Linux
rm ~/.local/share/manifoldbt/license/license.key

# Windows (PowerShell)
Remove-Item "$env:LOCALAPPDATA\manifoldbt\license\license.key"

Verify

python
tier, email = mbt.license_info()
print(tier)   # "Pro" or "Community"
print(email)  # your email or None

Pro features are gated at runtime. If a Community user calls a Pro-only function (e.g. run_walk_forward()), a LicenseError is raised with a clear message.