Value-at-Risk (VaR): Non-Parametric Approaches

Bruce Haydon
5 min read · May 27, 2022

The key difference between non-parametric and parametric (e.g., delta-normal, lognormal) approaches is that non-parametric approaches do not specify or assume an underlying distribution: the data drives the analysis, not distributional assumptions. Historical simulation is limited by the discreteness of the data, but non-parametric techniques allow the “smoothing” of data points to turn a discrete distribution into a quasi-continuous one, permitting a VaR estimate at any confidence level between observations.

Bootstrap Historical Simulation (BHS) Approach

The bootstrap historical simulation is a simple and intuitive estimation procedure. In essence, the bootstrap technique

  1. draws a sample from the original data set,
  2. records the VaR from that particular sample and “returns” the data, and
  3. repeats this procedure over and over, recording a sample VaR each time.

Since the data is always “returned” to the data set, this procedure is akin to sampling with replacement. The best VaR estimate from the full data set is the average of all sample VaRs.
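A minimal Python sketch of this resampling loop, assuming a generic series of daily losses; the function name, resample count, and confidence level below are illustrative choices, not part of the original text:

```python
import numpy as np

def bootstrap_var(losses, confidence=0.95, n_resamples=1000, seed=42):
    """Estimate VaR by averaging the VaR of many bootstrap resamples.

    Each resample is drawn with replacement from the original loss data,
    its VaR (the `confidence` quantile of losses) is recorded, and the
    final estimate is the mean of all resample VaRs.
    """
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses)
    sample_vars = []
    for _ in range(n_resamples):
        resample = rng.choice(losses, size=len(losses), replace=True)
        sample_vars.append(np.quantile(resample, confidence))
    return np.mean(sample_vars)

# Illustrative use with simulated daily losses
losses = np.random.default_rng(0).normal(loc=0.0, scale=0.01, size=500)
print(bootstrap_var(losses, confidence=0.95))
```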

This same procedure can be used to estimate the expected shortfall (ES). For each drawn sample, an ES is calculated by slicing the tail region into n slices and averaging the VaRs at each of the (n − 1) intermediate quantiles. Similar to the VaR calculation above, the best estimate of the expected shortfall for the original data set is the average of all of the sample expected shortfalls.
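The same idea extended to ES, again as an illustrative sketch: each resample’s tail is cut into n slices and the VaRs at the interior quantiles are averaged (the slice count here is an arbitrary choice):

```python
import numpy as np

def bootstrap_es(losses, confidence=0.95, n_slices=10, n_resamples=1000, seed=42):
    """Estimate ES by averaging the ES of many bootstrap resamples.

    Within each resample, the tail beyond the VaR confidence level is cut
    into `n_slices` slices, and the VaRs at the (n_slices - 1) interior
    quantiles are averaged to approximate that resample's ES.
    """
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses)
    # Interior quantiles between the confidence level and 1.0
    tail_quantiles = confidence + (1 - confidence) * np.arange(1, n_slices) / n_slices
    sample_es = []
    for _ in range(n_resamples):
        resample = rng.choice(losses, size=len(losses), replace=True)
        sample_es.append(np.quantile(resample, tail_quantiles).mean())
    return np.mean(sample_es)
```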

When compared to historical simulation on raw data alone, the bootstrapping technique consistently provides more precise estimates of coherent risk measures.

Non-Parametric Estimation (Interpolation)

One of the advantages of non-parametric density estimation is that it imposes no restrictive assumptions on the underlying distribution. The existing data points can therefore be “smoothed” to allow VaR calculation at any confidence level through an interpolation process.

The simplest adjustment is to connect the midpoints between successive histogram bars in the original data set’s distribution. See Figure 2.1 for an illustration of this surrogate density function.

Notice that we still have a probability distribution function, just with a modified shape. The shaded area in the diagram below represents a possible confidence interval, which can be utilized regardless of the size of the data set. The major improvement of this non-parametric approach over the traditional historical simulation approach is that VaR can now be calculated for a continuum of points in the data set (a discrete distribution becomes continuous).

The linear adjustment is a simple solution to the interval problem. An enhancement to this technique would involve connecting curves, rather than straight lines, between successive data points to better model the interpolated data in between.
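One possible way to implement the interpolation in Python, under the assumption that each ordered observation sits at the midpoint of its histogram bar; swapping the linear step for a spline would give the curve-based enhancement mentioned above:

```python
import numpy as np

def interpolated_var(losses, confidence):
    """VaR at an arbitrary confidence level via linear interpolation
    between the empirical quantiles of adjacent ordered observations."""
    ordered = np.sort(np.asarray(losses))
    n = len(ordered)
    # Assign each ordered observation the plotting position (i + 0.5) / n,
    # i.e. the midpoint of its histogram "bar"
    positions = (np.arange(n) + 0.5) / n
    return np.interp(confidence, positions, ordered)

# VaR at a confidence level that falls between two observations
losses = np.random.default_rng(1).normal(0.0, 0.01, size=250)
print(interpolated_var(losses, 0.975))
```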

Weighted Historical Simulation — 4 Approaches

The default Historical Simulation discussed so far uses the most recent n observations (an arbitrarily chosen window) up to a specified cutoff point when computing the current-period VaR. Older observations beyond the cutoff date are assumed to have zero weight, and each of the relevant n observations carries an equal weight of 1/n.

While simple in concept, this method has deficiencies. For instance, why should the nth observation count as much as every other observation, while the (n + 1)th observation carries no weight at all? The current VaR may also carry “ghost effects” of past events that remain in the computation until they drop out of the window (after n periods).

Furthermore, this default Historical Simulation method assumes that each observation is independent and identically distributed (iid). This is a very strong assumption, which is likely violated in the sense that a good proportion of data contains clear seasonality behaviour (i.e., seasonal volatility). Below are four enhancements to the traditional historical simulation method.

To reiterate, the default approach is “equal-weighting”, where every observation in the window carries the same weight. There are four alternative approaches to weighting observations in the Historical Simulation approach:

  • Age-Weighted
  • Volatility-Weighted
  • Correlation-Weighted
  • Filtered

(a) Age-Weighted Historical Simulation

The obvious adjustment to the equal-weighted assumption used in historical simulation is to weight recent observations more and distant observations less. One method (Boudoukh, Richardson, and Whitelaw) is as follows:
Assume w(1) is the probability weight for the observation that is one day old. Then w(2) can be defined as λw(1), w(3) as λ²w(1), and so on. The decay parameter, λ, can take on values 0 ≤ λ ≤ 1, where values close to 1 indicate slow decay. Since all of the weights must sum to 1, we conclude that w(1) = (1 − λ) / (1 − λ^n). More generally, the weight for an observation that is i days old is equal to:

w(i) = λ^(i−1) (1 − λ) / (1 − λ^n)

One benefit of the age-weighted simulation is to reduce the impact of ghost effects and older events that may not reoccur. Note that this more general weighting scheme suggests that historical simulation is a special case where λ = 1 (i.e., no decay) over the estimation window.
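A minimal sketch of age-weighted VaR using these weights; the decay parameter and the simulated loss window are illustrative assumptions:

```python
import numpy as np

def age_weighted_var(losses, confidence=0.95, lam=0.98):
    """Age-weighted historical-simulation VaR (Boudoukh, Richardson, Whitelaw).

    `losses` are ordered from most recent (1 day old) to oldest (n days old);
    the weight of an observation i days old is lam**(i-1) * (1-lam) / (1-lam**n).
    """
    losses = np.asarray(losses)
    n = len(losses)
    ages = np.arange(1, n + 1)                       # 1 = most recent
    weights = lam ** (ages - 1) * (1 - lam) / (1 - lam ** n)
    # Sort losses from largest to smallest and accumulate their weights
    # until the tail probability (1 - confidence) is reached
    order = np.argsort(losses)[::-1]
    cum_weight = np.cumsum(weights[order])
    var_index = np.searchsorted(cum_weight, 1 - confidence)
    return losses[order][var_index]

# Illustrative use with simulated daily losses, most recent first
losses = np.random.default_rng(2).normal(0.0, 0.01, size=500)
print(age_weighted_var(losses, confidence=0.95, lam=0.98))
```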

(b) Volatility-Weighted Historical Simulation

The second weighting approach is to weight the individual observations by volatility rather than proximity to the current date, as is the case with the “age weighted” method above. This method (Hull and White) was developed to incorporate changing volatility in risk estimation. The intuition behind this weighting methodology is as follows:

  • If recent volatility has increased, then using historical data will underestimate the current risk level.
  • Similarly, if current volatility has decreased materially, older data from periods of higher volatility will overstate the current risk level.

Thus, each historical return r(t,i) is replaced with a volatility-adjusted return r*(t,i) = (σ(T,i) / σ(t,i)) × r(t,i), where σ(T,i) is the current volatility forecast and σ(t,i) is the volatility forecast for the day the return was observed; the adjusted return is larger (smaller) than the raw return whenever current volatility exceeds (is below) the volatility on that day. Using this weighting scheme, VaR, ES, and any other coherent risk measure can be calculated in the usual way after substituting historical returns with volatility-adjusted returns.

Advantages of the volatility-weighted method: (1) it explicitly incorporates volatility into the estimation procedure, in contrast to other historical methods; (2) near-term VaR estimates are likely to be more sensible in light of current market conditions; and (3) the volatility-adjusted returns permit VaR estimates that exceed the largest losses in the historical data set.
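A sketch of the adjustment, assuming a simple EWMA estimate stands in for whatever volatility model (e.g., GARCH) a practitioner would actually use; the parameter values are illustrative:

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA (RiskMetrics-style) volatility estimate for each day."""
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(var)

def volatility_weighted_var(returns, confidence=0.95, lam=0.94):
    """Volatility-weighted historical-simulation VaR (Hull-White style).

    Each historical return is scaled by the ratio of current volatility
    to the volatility estimated for the day the return was observed,
    then VaR is read off the adjusted return distribution as usual.
    """
    returns = np.asarray(returns, dtype=float)
    vol = ewma_volatility(returns, lam)
    adjusted = returns * vol[-1] / vol   # scale by current / historical vol
    # VaR as the (positive) loss quantile of the adjusted returns
    return -np.quantile(adjusted, 1 - confidence)

# Illustrative use with simulated daily returns
returns = np.random.default_rng(3).normal(0.0, 0.01, size=500)
print(volatility_weighted_var(returns, confidence=0.95))
```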


Copyright © 2020–2022 Bruce Haydon
