# How good is the generated random number?

- Problem: random number generator software produces sequences of numbers that appear random but are not truly random. Each number is the result of a deterministic calculation, and the sequence is replicable (it eventually repeats after some number of iterations).
- Such generators are called pseudorandom number generators. A good pseudorandom generator imitates random behavior well and satisfies the major laws of probability.
- A pseudorandom generator is considered good if:
    - The numbers in the generated sequence are uniformly distributed between 0 and 1.
    - The sequence has a long cycle (i.e., it repeats itself only after many iterations).
    - The numbers in the sequence are not autocorrelated (verify through the Durbin-Watson test).
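These properties can be checked empirically. As a rough sketch in pure Python (using the standard library's generator; the `durbin_watson` helper is ours, not a library function), the snippet below verifies that the sample mean is near 0.5 and that the Durbin-Watson statistic is near 2, the value expected when there is no first-order autocorrelation:

```python
import random
import statistics

def durbin_watson(series):
    """Durbin-Watson statistic on deviations from the sample mean.
    Values near 2 suggest no first-order autocorrelation;
    values near 0 or 4 suggest positive or negative autocorrelation."""
    mean = statistics.fmean(series)
    e = [x - mean for x in series]
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(x * x for x in e)
    return num / den

random.seed(42)
sample = [random.random() for _ in range(10_000)]

dw = durbin_watson(sample)
print(f"mean = {statistics.fmean(sample):.3f}  (uniform on [0,1] -> ~0.5)")
print(f"DW   = {dw:.3f}  (no autocorrelation -> ~2)")
```

Testing the cycle length directly is impractical (the Mersenne Twister's period is astronomically long), so in practice only the uniformity and autocorrelation properties are checked on a finite sample.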

*Problem: a truly random sequence can suffer from clustering, leaving gaps in the unit interval [0,1]. Many scenarios may therefore be needed to obtain a good representation of the output distribution.*

- This problem can be addressed by using quasirandom (low-discrepancy) number generators, which fill in the gaps on the unit interval [0,1]. Examples: Faure and Sobol sequences.
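The gap-filling property can be illustrated with the Van der Corput sequence, the simple one-dimensional low-discrepancy construction underlying Halton-type sequences (Faure and Sobol sequences are more elaborate multidimensional versions of the same idea). This sketch generates the base-2 sequence and measures how evenly it covers [0,1]:

```python
def van_der_corput(n, base=2):
    """First n points of the base-`base` Van der Corput sequence:
    mirror the digits of the integer index about the radix point,
    e.g. in base 2: 1 -> 0.1 = 0.5, 2 -> 0.01 = 0.25, 3 -> 0.11 = 0.75."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            k, digit = divmod(k, base)
            denom *= base
            x += digit / denom
        points.append(x)
    return points

pts = van_der_corput(15)
# Unlike i.i.d. uniform draws, the points fill [0,1] evenly:
# with 15 base-2 points every gap between neighbors is exactly 1/16.
srt = sorted(pts)
gaps = [b - a for a, b in zip(srt, srt[1:])]
print(f"max gap with 15 points: {max(gaps):.4f}")  # -> 0.0625
```

A pseudorandom sample of the same size would typically show both clusters and much wider gaps.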

**Stratified Sampling**

- A variance reduction technique in which the range of the random variable is divided into smaller ranges, or strata, and random numbers are generated from each stratum.
- Addresses the problem of clustering of observations and provides a good representation of the probability distribution.
- Helps include extreme observations (the tails) in the simulation.
- Example: Latin Hypercube Sampling
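A minimal sketch of the idea (the helper name `stratified_uniform` is ours): divide [0,1] into equal strata and draw exactly one uniform from each, so no sub-interval is left empty, in contrast to plain i.i.d. sampling:

```python
import random

def stratified_uniform(n_strata, rng=random):
    """One uniform draw per stratum: the i-th draw lies in
    [i/n_strata, (i+1)/n_strata), so every sub-interval is covered."""
    return [(i + rng.random()) / n_strata for i in range(n_strata)]

random.seed(0)
plain = [random.random() for _ in range(10)]   # ordinary i.i.d. draws
strat = stratified_uniform(10)                 # one draw per decile

def deciles_hit(xs):
    """Which of the ten deciles [0, 0.1), [0.1, 0.2), ... contain a draw."""
    return sorted({int(x * 10) for x in xs})

print("plain draws hit deciles:     ", deciles_hit(plain))
print("stratified draws hit deciles:", deciles_hit(strat))
```

The stratified sample hits all ten deciles by construction, including the extreme ones; the plain sample frequently clusters and leaves some deciles empty. Latin Hypercube Sampling extends this one-per-stratum idea to multiple dimensions.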

The measurement error in VAR, due to sampling variation, should be greater with:

A. More observations and a high confidence level (e.g., 99%)

B. Fewer observations and a high confidence level

C. More observations and a low confidence level (e.g., 95%)

D. Fewer observations and a low confidence level

**B.**

Sampling variability (or imprecision) increases with (1) fewer observations and (2) greater confidence levels. To show (1), note that the precision of the sample mean varies inversely with the square root of the number of data points. Similar reasoning applies to (2): a greater confidence level relies on fewer observations in the left tail, from which VAR is computed.
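Both effects can be seen in a small experiment (a sketch under the assumption of i.i.d. standard normal returns; `empirical_var` and `var_std_error` are our own helpers, not library functions). It repeatedly estimates historical-simulation VAR from samples of different sizes and confidence levels and reports how much the estimates vary across trials:

```python
import random
import statistics

def empirical_var(returns, conf):
    """Historical-simulation VAR: the conf-level quantile of losses."""
    losses = sorted(-r for r in returns)
    idx = int(conf * len(losses))
    return losses[min(idx, len(losses) - 1)]

def var_std_error(n_obs, conf, trials=400, seed=1):
    """Std. deviation of the VAR estimate across repeated samples."""
    rng = random.Random(seed)
    estimates = [
        empirical_var([rng.gauss(0, 1) for _ in range(n_obs)], conf)
        for _ in range(trials)
    ]
    return statistics.stdev(estimates)

for n in (1000, 100):
    for conf in (0.95, 0.99):
        se = var_std_error(n, conf)
        print(f"n={n:5d}  conf={conf:.0%}  std error of VAR ~ {se:.3f}")
```

The dispersion of the estimates grows both when the sample shrinks (roughly as 1/√n) and when the confidence level rises, since the 99% quantile is estimated from far fewer tail observations than the 95% quantile.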

Consider a stock that pays no dividends, has a volatility of 25% per annum, and has an expected return of 13% per annum. Suppose that the current share price of the stock, S_{0}, is USD 30. You decide to model the stock price behavior using a discrete-time version of geometric Brownian motion and to simulate paths of the stock price using Monte Carlo simulation. Let Δt denote the time interval used and let S_t denote the stock price at time interval t. So, according to your model, S_{t+1} = S_{t} (1 + 0.13 Δt + 0.25 √Δt ε), where ε is a standard normal variable. To implement this simulation, you generate a path of the stock price by starting at t = 0, generating a sample for ε, updating the stock price according to the model, incrementing t by 1, and repeating this process until the end of the horizon is reached. Which of the following strategies for generating a sample for ε will implement this simulation properly?

A. Generate a sample for ε by using the inverse of the standard normal cumulative distribution of a sample value drawn from a uniform distribution between 0 and 1.

B. Generate a sample for ε by sampling from a normal distribution with mean 0.13 and standard deviation 0.25.

C. Generate a sample for ε by using the inverse of the standard normal cumulative distribution of a sample value drawn from a uniform distribution between 0 and 1. Use Cholesky decomposition to correlate this sample with the sample from the previous time interval.

D. Generate a sample for ε by sampling from a normal distribution with mean 0.13 and standard deviation 0.25. Use Cholesky decomposition to correlate this sample with the sample from the previous time interval.

**A.**

The model assumes independent shocks across time, so there is no need to correlate samples from one time period to the next, eliminating C and D. Choice B is wrong because ε must be a standard normal variable (mean 0, standard deviation 1); the drift of 0.13 and volatility of 0.25 are already applied in the price-update equation. Choice A describes a valid method, inverse-transform sampling, for generating a sample from a standard normal distribution.
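The strategy in choice A can be sketched in a few lines of Python (a minimal illustration, assuming daily steps with Δt = 1/252 over a one-year horizon; `statistics.NormalDist().inv_cdf` in the standard library supplies the inverse standard normal CDF):

```python
import random
from statistics import NormalDist

def simulate_gbm_path(s0=30.0, mu=0.13, sigma=0.25,
                      dt=1 / 252, steps=252, seed=7):
    """Discrete-time GBM path: S_{t+1} = S_t (1 + mu*dt + sigma*sqrt(dt)*eps),
    where each eps is drawn by inverse-transform sampling (choice A):
    a U(0,1) draw pushed through the inverse standard normal CDF."""
    rng = random.Random(seed)
    inv_cdf = NormalDist().inv_cdf   # standard normal: mean 0, stdev 1
    sqrt_dt = dt ** 0.5
    s = s0
    path = [s]
    for _ in range(steps):
        eps = inv_cdf(rng.random())  # fresh, independent draw each step
        s = s * (1 + mu * dt + sigma * sqrt_dt * eps)
        path.append(s)
    return path

path = simulate_gbm_path()
print(f"S_0 = {path[0]:.2f}, S_T after one year = {path[-1]:.2f}")
```

Note that each ε is drawn independently, with no correlation carried over from the previous step, and that the drift and volatility enter only through the update equation, never through the distribution ε is sampled from.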