Chapter 12 — Monte Carlo Methods

"Monte Carlo is the method of last resort that consistently saves your career."


After this chapter you will be able to:

  • Implement a basic Monte Carlo pricer and compute standard errors and confidence intervals
  • Apply antithetic variates and control variates, and quantify the variance reduction using $1 - \rho_{fg}^2$
  • Use Sobol quasi-random sequences and understand when low-discrepancy sequences outperform pseudo-random
  • Price path-dependent options (Asian, barrier) via Monte Carlo
  • Implement Longstaff-Schwartz for American option pricing

In the 1940s, physicists working on the Manhattan Project needed to simulate neutron diffusion through fissile material — a problem with so many interacting particles that no analytical solution existed. Stanislaw Ulam, recovering from an illness and playing solitaire, had an idea: instead of tracking individual particles deterministically, simulate many random paths and average the outcomes. He and John von Neumann named the method after the casino in Monaco. By the 1970s, financial engineers had imported it wholesale.

Monte Carlo pricing is conceptually simple: simulate many possible future states of the world, compute the option payoff in each one, and average them. The law of large numbers guarantees convergence. The central limit theorem tells us the error shrinks as $1/\sqrt{N}$, meaning to halve the error you need four times as many paths. At 10,000 paths you get roughly 1% accuracy; at 1,000,000 paths, about 0.1%. For a standard European option, this is unnecessarily slow — Black-Scholes gives the exact answer in microseconds. The real power of Monte Carlo emerges where closed forms do not exist: path-dependent options (barriers, Asians, lookbacks), early-exercise options (American and Bermudan), options on baskets of correlated assets, and products with complex payoff structures.

This chapter builds a complete Monte Carlo library in OCaml. We start with the basic estimator, then develop variance reduction techniques — antithetic variates, control variates, and quasi-random sequences — that can reduce variance by 10x to 100x without adding paths. We price Asian and lookback options to illustrate path dependency, implement the Longstaff-Schwartz algorithm for American options, and exploit OCaml 5's Domain module for near-linear parallel speedup.


12.1 Foundations of Monte Carlo Simulation

The core idea is the law of large numbers applied to option pricing. Under the risk-neutral measure $\mathbb{Q}$, the price of any derivative equals its discounted expected payoff, $V_0 = e^{-rT}\,\mathbb{E}^{\mathbb{Q}}[f(S_T)]$. Monte Carlo estimates this expectation with the sample average over $N$ simulated terminal prices:

$$\hat{V}_0 = e^{-rT} \frac{1}{N} \sum_{i=1}^N f(S^{(i)}_T)$$

The standard error of the estimate is $\text{SE} = \sigma_f / \sqrt{N}$, giving $O(1/\sqrt{N})$ convergence — slow but dimension-independent.

Figure 12.1 — Convergence of Monte Carlo valuation for a European call option. The estimate oscillates around the exact Black-Scholes price and narrows as $1/\sqrt{N}$, with the 95% confidence interval containing the true price.

module Mc = struct

  type result = {
    price     : float;
    std_error : float;
    conf_lo   : float;     (* 95% CI lower bound *)
    conf_hi   : float;
    n_paths   : int;
  }

  let pp_result r =
    Printf.printf "Price: %.6f  SE: %.6f  95%% CI: [%.6f, %.6f]  (N=%d)\n"
      r.price r.std_error r.conf_lo r.conf_hi r.n_paths

  (** Box-Muller standard normal sample *)
  let std_normal () =
    (* Map u1 into (0, 1] so that log u1 stays finite *)
    let u1 = 1.0 -. Random.float 1.0 and u2 = Random.float 1.0 in
    sqrt (-. 2.0 *. log u1) *. cos (2.0 *. Float.pi *. u2)

  (**
      Basic GBM terminal price simulation.
      S_T = S_0 * exp((r - q - σ²/2)T + σ√T Z), Z ~ N(0,1)
  *)
  let gbm_terminal ~spot ~rate ~div_yield ~vol ~tau () =
    let z    = std_normal () in
    let drift = (rate -. div_yield -. 0.5 *. vol *. vol) *. tau in
    spot *. exp (drift +. vol *. sqrt tau *. z)

  (**
      Price European option with plain Monte Carlo.
  *)
  let european_mc ~spot ~strike ~rate ?(div_yield = 0.0) ~vol ~tau ~n_paths
                  ~option_type () =
    let df       = exp (-. rate *. tau) in
    let payoffs  = Array.init n_paths (fun _ ->
      let st = gbm_terminal ~spot ~rate ~div_yield ~vol ~tau () in
      match option_type with
      | `Call -> Float.max 0.0 (st -. strike)
      | `Put  -> Float.max 0.0 (strike -. st)
    ) in
    let mean   = Array.fold_left (+.) 0.0 payoffs /. float_of_int n_paths in
    let var    = Array.fold_left (fun acc x ->
                   acc +. (x -. mean) *. (x -. mean)) 0.0 payoffs
                 /. float_of_int (n_paths - 1) in
    let se     = df *. sqrt (var /. float_of_int n_paths) in
    let price  = df *. mean in
    { price; std_error = se;
      conf_lo = price -. 1.96 *. se;
      conf_hi = price +. 1.96 *. se;
      n_paths }

end

The Mc.result record captures not just the price but the statistical uncertainty around it. The 95% confidence interval $[\text{price} \pm 1.96 \times \text{SE}]$ is as important as the price itself: an MC result reported without its error bar is meaningless. For a practical example, pricing an ATM European call ($S=K=100$, $r=5\%$, $\sigma=20\%$, $T=1$) with $N=10{,}000$ paths typically gives something like Price: 10.452  SE: 0.104  95% CI: [10.248, 10.656]. The true Black-Scholes price is 10.450 — well within the confidence interval, as it should be about 95% of the time.
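
As a minimal usage sketch (assuming the Mc module above is in scope), numbers of this kind come from a call such as:

(* Price the ATM call from the worked example and print price, SE and the
   95% confidence interval. *)
let () =
  Random.self_init ();
  Mc.european_mc ~spot:100.0 ~strike:100.0 ~rate:0.05 ~vol:0.2 ~tau:1.0
    ~n_paths:10_000 ~option_type:`Call ()
  |> Mc.pp_result

Because the confidence interval is itself random, roughly one run in twenty will exclude the true price even when the implementation is correct.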


12.2 Variance Reduction Techniques

The greatest leverage in Monte Carlo is not more paths — it is smarter sampling. Variance reduction techniques exploit structure in the problem to reduce $\sigma_f$ (the standard deviation of the payoff distribution), achieving the same accuracy with far fewer paths. The three most important techniques are antithetic variates, control variates, and quasi-random (low-discrepancy) sequences.

Figure 12.2 — Distribution of Monte Carlo estimates over 100 independent trials using crude MC, antithetic variates, and control variates. Variance reduction visibly tightens the distribution around the true price.

12.2.1 Antithetic Variates

For every random draw $Z \sim N(0,1)$, also use its negative $-Z$. The two terminal prices are negatively correlated (when $S_T^+$ is high because $Z$ was large and positive, $S_T^-$ is low), and averaging their payoffs cancels part of the variance. Relative to crude MC with the same number of payoff evaluations, the variance is multiplied by $1 + \rho$, where $\rho = \text{Corr}\big(f(S^+), f(S^-)\big)$; the more negative $\rho$, the larger the reduction. For a convex payoff like a call, the paths are negatively correlated but the payoffs are not perfectly anti-correlated (both can be in-the-money if the stock drifts up strongly), so $\rho$ stays well above $-1$ and the reduction falls short of the ideal 100%. Empirically, antithetic variates reduce variance by 30–60% for standard European options with negligible computational overhead.

let european_antithetic ~spot ~strike ~rate ?(div_yield = 0.0) ~vol ~tau ~n_pairs
                        ~option_type () =
  let df     = exp (-. rate *. tau) in
  let drift  = (rate -. div_yield -. 0.5 *. vol *. vol) *. tau in
  let sigma_sqrt_t = vol *. sqrt tau in
  let payoffs = Array.init n_pairs (fun _ ->
    let z  = Mc.std_normal () in
    let s1 = spot *. exp (drift +. sigma_sqrt_t *. z) in
    let s2 = spot *. exp (drift -. sigma_sqrt_t *. z) in
    let pf  = match option_type with
      | `Call -> (Float.max 0.0 (s1 -. strike) +. Float.max 0.0 (s2 -. strike)) /. 2.0
      | `Put  -> (Float.max 0.0 (strike -. s1) +. Float.max 0.0 (strike -. s2)) /. 2.0
    in pf
  ) in
  let mean  = Array.fold_left (+.) 0.0 payoffs /. float_of_int n_pairs in
  let var   = Array.fold_left (fun a x -> a +. (x -. mean) *. (x -. mean)) 0.0 payoffs
              /. float_of_int (n_pairs - 1) in
  let se    = df *. sqrt (var /. float_of_int n_pairs) in
  let price = df *. mean in
  Mc.{ price; std_error = se; conf_lo = price -. 1.96 *. se;
       conf_hi = price +. 1.96 *. se; n_paths = 2 * n_pairs }

12.2.2 Control Variates

The control variate technique is the most powerful variance reduction method available when an analytically tractable related problem exists. The mathematics is as follows. Let $f$ be the payoff of interest (e.g., arithmetic Asian call) and $g$ be a related payoff with known exact value $\mu_g = E[g]$ (e.g., European call, which has a Black-Scholes closed form). We estimate $E[f]$ not as the sample mean $\bar{f}$, but as the adjusted estimator:

$$\hat{V}_\text{CV} = e^{-rT}\left[\bar{f} - \beta(\bar{g} - \mu_g)\right]$$

The optimal coefficient minimising the variance of $\hat{V}_\text{CV}$ is:

$$\beta^* = \frac{\text{Cov}(f, g)}{\text{Var}(g)} = \rho_{fg} \cdot \frac{\sigma_f}{\sigma_g}$$

With this optimal coefficient, the variance of the adjusted estimator is:

$$\text{Var}(\hat{V}_\text{CV}) = \text{Var}(\bar{f}) \cdot (1 - \rho_{fg}^2)$$

The variance reduction factor is $(1 - \rho_{fg}^2)$. If $f$ and $g$ are 90% correlated, the residual variance factor is $1 - 0.9^2 = 0.19$ — an 81% reduction, equivalent to running $1/0.19 \approx 5$ times more paths. At 99% correlation the factor is $1 - 0.99^2 \approx 0.02$ — a 98% reduction, equivalent to roughly 50 times more paths. This is why the control variate is so powerful: the better the control correlates with the target, the more dramatic the variance reduction.

In practice, $\beta^*$ is estimated from the same simulation paths using the sample covariance. The estimation error introduces a small bias of order $O(1/N)$, negligible for $N \geq 1000$. The control and target payoffs must be computed on the same paths (same random draws), so correlation structure is preserved.

For the arithmetic Asian call with a geometric Asian control, the empirical correlation between the two payoffs is typically 0.97–0.99 for near-ATM options, giving 94–98% variance reduction — equivalent to running roughly 20–50 times more paths. This is why the geometric control variate is the industry standard for Asian option pricing.

The code below uses a European call, priced exactly by Black-Scholes, as the control for an exotic payoff $f$ evaluated on the same simulated terminal prices:

$$\hat{V} = e^{-rT}\left[\bar{f} - \beta(\bar{g} - g_{\text{exact}})\right]$$

where $g$ is the undiscounted European call payoff, $g_{\text{exact}} = e^{rT} \times \text{(Black-Scholes call price)}$ is its exact undiscounted expectation, and $\beta = \text{Cov}(f, g) / \text{Var}(g)$ is estimated from the simulated paths.

(* The exotic's own strike (if any) is captured inside exotic_payoff, so only
   the control's strike is passed explicitly. *)
let control_variate_call ~spot ~strike_control ~rate ~vol ~tau ~n_paths
                         ~exotic_payoff () =
  let df         = exp (-. rate *. tau) in
  let drift      = (rate -. 0.5 *. vol *. vol) *. tau in
  let sigma_t    = vol *. sqrt tau in
  let ctrl_exact = Black_scholes.call ~spot ~strike:strike_control ~rate ~vol ~tau in
  let n          = float_of_int n_paths in
  let f_vals     = Array.make n_paths 0.0 in
  let g_vals     = Array.make n_paths 0.0 in
  for i = 0 to n_paths - 1 do
    let z  = Mc.std_normal () in
    let st = spot *. exp (drift +. sigma_t *. z) in
    f_vals.(i) <- exotic_payoff st;
    g_vals.(i) <- Float.max 0.0 (st -. strike_control)
  done;
  let f_mean = Array.fold_left (+.) 0.0 f_vals /. n in
  let g_mean = Array.fold_left (+.) 0.0 g_vals /. n in
  let cov_fg =
    let acc = ref 0.0 in
    for i = 0 to n_paths - 1 do
      acc := !acc +. (f_vals.(i) -. f_mean) *. (g_vals.(i) -. g_mean)
    done;
    !acc /. (n -. 1.0)
  in
  let var_g  = Array.fold_left (fun a x -> a +. (x -. g_mean) *. (x -. g_mean))
                 0.0 g_vals /. (n -. 1.0) in
  let beta   = cov_fg /. var_g in
  let adj    = Array.mapi (fun i fi -> fi -. beta *. (g_vals.(i) -. ctrl_exact /. df)) f_vals in
  let mean   = Array.fold_left (+.) 0.0 adj /. n in
  df *. mean
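
A usage sketch under illustrative assumptions (the capped-call payoff, the cap level of 120 and the parameter values are examples, not from the text): price a capped call using the uncapped European call at the same strike as control.

(* A call capped at 120: payoff min(max(S_T - 100, 0), 20). The uncapped
   European call at the same strike is a natural, highly correlated control. *)
let capped_call_price =
  let capped st = Float.min 20.0 (Float.max 0.0 (st -. 100.0)) in
  control_variate_call ~spot:100.0 ~strike_control:100.0 ~rate:0.05 ~vol:0.2
    ~tau:1.0 ~n_paths:50_000 ~exotic_payoff:capped ()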

12.3 Quasi-Random (Low-Discrepancy) Sequences

Pseudo-random sampling converges at $O(1/\sqrt{N})$ regardless of how the points happen to fall. Sobol, Halton, and other low-discrepancy sequences fill the unit hypercube far more evenly and achieve close to $O((\log N)^d / N)$ convergence in $d$ dimensions, where $d$ is the number of uniform draws per path (one for a terminal-price simulation, one per time step for a full path). For the moderate dimensions typical of option pricing this behaves in practice like nearly $O(1/N)$.

module Sobol = struct
  (** Simplified 1D Sobol sequence (first dimension only, Gray-code
      construction). The first dimension has all direction integers m_k = 1,
      which makes it the van der Corput sequence in base 2; for higher
      dimensions, use a full table of direction numbers such as Joe & Kuo (2010). *)

  let direction_numbers = Array.make 32 1   (* dimension 1: m_k = 1 for all k *)

  let bits = 32

  (** Gray-code update: [index] is the running counter, [state] the current
      Sobol integer. The bit to flip is chosen by the lowest zero bit of the
      counter, not of the state. *)
  let next_sobol ~index ~state =
    let c = ref 0 in
    let i = ref !index in
    while !i land 1 = 1 do incr c; i := !i lsr 1 done;
    state := !state lxor (direction_numbers.(!c) lsl (bits - !c - 1));
    incr index;
    float_of_int !state /. float_of_int (1 lsl bits)

  let make_sequence n =
    let index = ref 0 and state = ref 0 in
    Array.init n (fun _ -> next_sobol ~index ~state)

  (** Normal quantile transform via Beasley-Springer-Moro *)
  let norm_ppf u =
    (* Approximation sufficient for quasi-MC *)
    let a0 = 2.50662823884 and a1 = -18.61500062529 and a2 = 41.39119773534
    and a3 = -25.44106049637 and b1 = -8.47351093090 and b2 = 23.08336743743
    and b3 = -21.06224101826 and b4 = 3.13082909833 in
    let c0 = 0.3374754822726147 and c1 = 0.9761690190917186
    and c2 = 0.1607979714918209 and c3 = 0.0276438810333863
    and c4 = 0.0038405729373609 and c5 = 0.0003951896511349
    and c6 = 0.0000321767881768 and c7 = 0.0000002888167364
    and c8 = 0.0000003960315187 in
    let y = u -. 0.5 in
    if Float.abs y < 0.42 then
      let r = y *. y in
      y *. (((a3 *. r +. a2) *. r +. a1) *. r +. a0)
         /. ((((b4 *. r +. b3) *. r +. b2) *. r +. b1) *. r +. 1.0)
    else
      let r = if y > 0.0 then 1.0 -. u else u in
      let r = log (-. log r) in
      let s = c0 +. r *. (c1 +. r *. (c2 +. r *. (c3 +. r *. (c4 +. r
              *. (c5 +. r *. (c6 +. r *. (c7 +. r *. c8))))))) in
      if y < 0.0 then -. s else s

end
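
As a sketch of how the pieces combine, the following quasi-MC European call pricer (a new helper, quasi_mc_call, not from the text; parameter values are illustrative) maps each Sobol point through the inverse normal CDF to a GBM terminal price:

(* Quasi-MC European call: map each Sobol point to a standard normal draw via
   the Beasley-Springer-Moro inverse CDF, then to a GBM terminal price.
   The Gray-code Sobol sequence above never returns exactly 0 or 1, so
   norm_ppf can be applied directly. *)
let quasi_mc_call ~spot ~strike ~rate ~vol ~tau ~n =
  let df    = exp (-. rate *. tau) in
  let drift = (rate -. 0.5 *. vol *. vol) *. tau in
  let us    = Sobol.make_sequence n in
  let sum   =
    Array.fold_left (fun acc u ->
      let z  = Sobol.norm_ppf u in
      let st = spot *. exp (drift +. vol *. sqrt tau *. z) in
      acc +. Float.max 0.0 (st -. strike))
      0.0 us
  in
  df *. sum /. float_of_int n

let () =
  Printf.printf "Quasi-MC call: %.4f\n"
    (quasi_mc_call ~spot:100.0 ~strike:100.0 ~rate:0.05 ~vol:0.2 ~tau:1.0
       ~n:65_536)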

12.4 Path-Dependent Options

12.4.1 Asian Options

Asian options pay off on the average price over the life of the option rather than the terminal price alone. Under GBM the arithmetic average has no closed-form distribution, while the geometric average does; that closed form is exactly why the geometric Asian serves as the control variate in Section 12.2.2:

type asian_avg = Arithmetic | Geometric

let gbm_path ~spot ~rate ~div_yield ~vol ~tau ~n_steps () =
  let dt       = tau /. float_of_int n_steps in
  let drift    = (rate -. div_yield -. 0.5 *. vol *. vol) *. dt in
  let sigma_dt = vol *. sqrt dt in
  let path     = Array.make (n_steps + 1) spot in
  for i = 0 to n_steps - 1 do
    let z = Mc.std_normal () in
    path.(i + 1) <- path.(i) *. exp (drift +. sigma_dt *. z)
  done;
  path

let asian_option ~spot ~strike ~rate ~div_yield ~vol ~tau ~n_steps ~n_paths
                 ~option_type ~avg_type () =
  let df = exp (-. rate *. tau) in
  let payoffs = Array.init n_paths (fun _ ->
    let path = gbm_path ~spot ~rate ~div_yield ~vol ~tau ~n_steps () in
    let avg = match avg_type with
      | Arithmetic ->
        Array.fold_left (+.) 0.0 path /. float_of_int (Array.length path)
      | Geometric ->
        let sum_log = Array.fold_left (fun a s -> a +. log s) 0.0 path in
        exp (sum_log /. float_of_int (Array.length path))
    in
    match option_type with
    | `Call -> Float.max 0.0 (avg -. strike)
    | `Put  -> Float.max 0.0 (strike -. avg)
  ) in
  let mean = Array.fold_left (+.) 0.0 payoffs /. float_of_int n_paths in
  df *. mean
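
A usage sketch, assuming the functions above (parameter values are illustrative). Because the geometric average never exceeds the arithmetic average on any path, the geometric Asian call price should come out below the arithmetic one, which is a useful sanity check:

let () =
  Random.self_init ();
  let price avg_type =
    asian_option ~spot:100.0 ~strike:100.0 ~rate:0.05 ~div_yield:0.0 ~vol:0.2
      ~tau:1.0 ~n_steps:52 ~n_paths:100_000 ~option_type:`Call ~avg_type ()
  in
  Printf.printf "Arithmetic Asian call: %.4f\nGeometric Asian call:  %.4f\n"
    (price Arithmetic) (price Geometric)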

12.4.2 Lookback Options

Lookback options pay off on the running extremum of the path: the floating-strike call pays $S_T - S_{\min}$ and the floating-strike put pays $S_{\max} - S_T$, so both always finish with a non-negative payoff.

let lookback_call_floating ~spot ~rate ~div_yield ~vol ~tau ~n_steps ~n_paths () =
  (* Floating strike: call payoff = S_T - S_min *)
  let df = exp (-. rate *. tau) in
  let payoffs = Array.init n_paths (fun _ ->
    let path = gbm_path ~spot ~rate ~div_yield ~vol ~tau ~n_steps () in
    let s_min = Array.fold_left Float.min Float.max_float path in
    let s_t   = path.(Array.length path - 1) in
    Float.max 0.0 (s_t -. s_min)
  ) in
  df *. Array.fold_left (+.) 0.0 payoffs /. float_of_int n_paths

let lookback_put_floating ~spot ~rate ~div_yield ~vol ~tau ~n_steps ~n_paths () =
  (* put payoff = S_max - S_T *)
  let df = exp (-. rate *. tau) in
  let payoffs = Array.init n_paths (fun _ ->
    let path = gbm_path ~spot ~rate ~div_yield ~vol ~tau ~n_steps () in
    let s_max = Array.fold_left Float.max (-. Float.max_float) path in
    let s_t   = path.(Array.length path - 1) in
    Float.max 0.0 (s_max -. s_t)
  ) in
  df *. Array.fold_left (+.) 0.0 payoffs /. float_of_int n_paths

12.5 Longstaff-Schwartz for American Options

The Longstaff-Schwartz algorithm uses least-squares regression at each time step to estimate the continuation value:

$$Q(\omega, t) = E^Q[e^{-r\Delta t} V(\omega, t+\Delta t) \mid \mathcal{F}_t]$$

Approximate $Q$ as a polynomial in $S_t$: regress future discounted payoffs on $1, S_t, S_t^2$.

module Longstaff_schwartz = struct

  (** Regress y on [1; x; x²] — returns [a0; a1; a2] via normal equations *)
  let ols_poly2 xs ys =
    let n   = float_of_int (Array.length xs) in
    let s1  = Array.fold_left (+.) 0.0 xs in
    let s2  = Array.fold_left (fun a x -> a +. x *. x) 0.0 xs in
    let s3  = Array.fold_left (fun a x -> a +. x *. x *. x) 0.0 xs in
    let s4  = Array.fold_left (fun a x -> a +. x *. x *. x *. x) 0.0 xs in
    let y0  = Array.fold_left (+.) 0.0 ys in
    (* the stdlib Array module has no fold_left2; accumulate with an index *)
    let y1  =
      let acc = ref 0.0 in
      Array.iteri (fun i x -> acc := !acc +. x *. ys.(i)) xs; !acc in
    let y2  =
      let acc = ref 0.0 in
      Array.iteri (fun i x -> acc := !acc +. x *. x *. ys.(i)) xs; !acc in
    (* Solve 3x3 system: simplified (demo quality — use LAPACK in production) *)
    let m = [| [| n;  s1; s2 |]; [| s1; s2; s3 |]; [| s2; s3; s4 |] |] in
    let b = [| y0; y1; y2 |] in
    (* Gaussian elimination *)
    for i = 0 to 2 do
      let piv = m.(i).(i) in
      for j = i + 1 to 2 do
        let f = m.(j).(i) /. piv in
        for k = i to 2 do m.(j).(k) <- m.(j).(k) -. f *. m.(i).(k) done;
        b.(j) <- b.(j) -. f *. b.(i)
      done
    done;
    let x = Array.make 3 0.0 in
    for i = 2 downto 0 do
      x.(i) <- b.(i);
      for j = i + 1 to 2 do x.(i) <- x.(i) -. m.(i).(j) *. x.(j) done;
      x.(i) <- x.(i) /. m.(i).(i)
    done;
    x

  let eval_poly2 coeffs s = coeffs.(0) +. coeffs.(1) *. s +. coeffs.(2) *. s *. s

  let american_put ~spot ~strike ~rate ~vol ~tau ~n_steps ~n_paths () =
    let dt       = tau /. float_of_int n_steps in
    let drift    = (rate -. 0.5 *. vol *. vol) *. dt in
    let sigma_dt = vol *. sqrt dt in
    let df_step  = exp (-. rate *. dt) in

    (* Simulate all paths *)
    let paths = Array.init n_paths (fun _ ->
      let p = Array.make (n_steps + 1) spot in
      for i = 0 to n_steps - 1 do
        let z = Mc.std_normal () in
        p.(i + 1) <- p.(i) *. exp (drift +. sigma_dt *. z)
      done;
      p
    ) in

    (* Cashflow matrix — initially terminal payoff *)
    let cf = Array.map (fun path ->
      Float.max 0.0 (strike -. path.(n_steps))
    ) paths in

    (* Backward induction with LSM regression *)
    for step = n_steps - 1 downto 1 do
      (* First discount every path's cashflow one step back, to time [step];
         the exercise decision below then compares values at the same date *)
      Array.iteri (fun i c -> cf.(i) <- df_step *. c) cf;
      let intrinsics = Array.map (fun path ->
        Float.max 0.0 (strike -. path.(step))
      ) paths in
      (* Only regress on in-the-money paths *)
      let itm = Array.to_list (Array.init n_paths (fun i ->
        if intrinsics.(i) > 0.0 then Some i else None))
        |> List.filter_map Fun.id in

      if itm <> [] then begin
        let xs = Array.of_list (List.map (fun i -> paths.(i).(step)) itm) in
        let ys = Array.of_list (List.map (fun i -> cf.(i)) itm) in
        let coeffs = ols_poly2 xs ys in
        List.iter (fun i ->
          let s    = paths.(i).(step) in
          let cont = eval_poly2 coeffs s in
          if intrinsics.(i) > cont then
            cf.(i) <- intrinsics.(i)   (* exercise early: replace continuation *)
        ) itm
      end
    done;

    (* Discount from the first exercise opportunity (t = dt) back to today *)
    let mean = Array.fold_left (+.) 0.0 cf /. float_of_int n_paths in
    df_step *. mean

end
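
A usage sketch, assuming the module above (parameter values are illustrative). Two sanity checks: the American put should be worth at least the corresponding European put, and at least its intrinsic value $\max(K - S_0, 0)$:

let () =
  Random.self_init ();
  let amer =
    Longstaff_schwartz.american_put ~spot:100.0 ~strike:100.0 ~rate:0.05
      ~vol:0.2 ~tau:1.0 ~n_steps:50 ~n_paths:100_000 ()
  in
  let euro =
    (Mc.european_mc ~spot:100.0 ~strike:100.0 ~rate:0.05 ~vol:0.2 ~tau:1.0
       ~n_paths:100_000 ~option_type:`Put ()).Mc.price
  in
  Printf.printf "American put (LSM): %.4f   European put (MC): %.4f\n" amer euro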

12.6 Parallelism with OCaml 5 Domains

OCaml 5 introduces true parallelism via Domains. Monte Carlo is embarrassingly parallel:

module Parallel_mc = struct

  let n_domains = Domain.recommended_domain_count ()

  (** Parallel European MC using OCaml 5 domains *)
  let european_parallel ~spot ~strike ~rate ?(div_yield = 0.0) ~vol ~tau
                        ~total_paths ~option_type () =
    let paths_per_domain = total_paths / n_domains in
    let df       = exp (-. rate *. tau) in
    let drift    = (rate -. div_yield -. 0.5 *. vol *. vol) *. tau in
    let sigma_t  = vol *. sqrt tau in

    let worker domain_id =
      ignore domain_id;
      (* OCaml 5 gives each domain its own domain-local Random state;
         reseeding here keeps the domains' random streams independent *)
      Random.self_init ();
      let sum = ref 0.0 in
      for _ = 1 to paths_per_domain do
        let z  = Mc.std_normal () in
        let st = spot *. exp (drift +. sigma_t *. z) in
        let pf = match option_type with
          | `Call -> Float.max 0.0 (st -. strike)
          | `Put  -> Float.max 0.0 (strike -. st)
        in
        sum := !sum +. pf
      done;
      !sum
    in

    let domains = Array.init n_domains (fun id ->
      Domain.spawn (fun () -> worker id)
    ) in

    let total_sum = Array.fold_left (fun acc d ->
      acc +. Domain.join d
    ) 0.0 domains in

    (* Normalise by the number of paths actually simulated: total_paths may
       not be an exact multiple of n_domains *)
    df *. total_sum /. float_of_int (paths_per_domain * n_domains)

end
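
A usage sketch, assuming the module above (parameter values are illustrative):

(* One call spawns Domain.recommended_domain_count () workers and joins them. *)
let () =
  let price =
    Parallel_mc.european_parallel ~spot:100.0 ~strike:100.0 ~rate:0.05
      ~vol:0.2 ~tau:1.0 ~total_paths:1_000_000 ~option_type:`Call ()
  in
  Printf.printf "Parallel MC price on %d domains: %.4f\n"
    Parallel_mc.n_domains price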

With 8 cores, parallel MC typically achieves a roughly 7× speedup. At fixed wall-clock time this allows about 7× more paths, cutting the standard error by a factor of $\sqrt{7} \approx 2.6$.


12.7 Chapter Summary

Monte Carlo is the universal pricing engine for complex derivatives. Its great strength is that it requires almost no mathematical analysis of the payoff — any payoff function that can be computed given a simulated path can be priced. Its great weakness is slow convergence: standard MC needs roughly 100x more paths to achieve 10x better accuracy.

Variance reduction transforms this tradeoff dramatically. Antithetic variates are essentially free — a trivial change to the simulation loop that buys 30–60% variance reduction. Control variates are more implementation work but can achieve 5-50x variance reduction when a good control (such as a European option, priced exactly by Black-Scholes) is available. Quasi-random sequences (Sobol, Halton) improve the convergence rate from $O(N^{-1/2})$ toward $O(N^{-1})$ in low dimensions, offering another order of magnitude of improvement. In practice, a well-implemented quasi-MC with antithetic sampling can achieve the accuracy of 1,000,000 random paths with only 10,000–50,000 quasi-random paths.

For path-dependent options, the full simulation trajectory is necessary. Asian options are relatively easy — average the path and apply the payoff. Lookback options require tracking the running extremum. Barrier options require monitoring every time step for a crossing event, which introduces discretisation bias (the simulated process cannot cross the barrier between grid points) that must be corrected using continuity corrections.
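
For reference, the standard continuity correction of Broadie, Glasserman and Kou (1997) replaces the discretely monitored barrier $B$ by an effective continuous barrier

$$B_{\text{eff}} = B\,e^{\pm \beta \sigma \sqrt{\Delta t}}, \qquad \beta = -\frac{\zeta(1/2)}{\sqrt{2\pi}} \approx 0.5826,$$

where $\Delta t$ is the monitoring interval; the plus sign applies to a barrier above the spot and the minus sign to one below.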

American options are where Monte Carlo shows its most sophisticated side. Longstaff-Schwartz inverts the usual backward-induction logic: it simulates paths forward, then runs a backward regression to estimate the continuation value at each exercise date. The algorithm is elegant, but requires care in choosing basis functions, handling in-the-money paths correctly, and managing the bias introduced by finite-sample regression.

OCaml 5's Domain module makes parallel MC almost effortless: spawn independent domains with independent RNG states, join their results. The embarrassingly parallel nature of MC means near-linear scaling with core count, making it one of the most natural uses of modern multi-core processors.


Exercises

12.1 Compare plain MC, antithetic, and quasi-MC (Sobol) for an ATM European call. Plot $\log(\text{error})$ vs $\log(N)$ and measure the convergence rate exponent.

12.2 Implement a barrier option (down-and-out call) using MC with daily time steps. Validate against the closed-form formula.

12.3 Implement the Longstaff-Schwartz algorithm with Laguerre polynomials as basis functions (literature standard) instead of monomials. Compare convergence.

12.4 Parallelise the Longstaff-Schwartz pricer using OCaml 5 Domains for the path generation phase while keeping the regression step single-threaded.


Next: Chapter 13 — Volatility