What Convergence in Probability Is
In probability theory, convergence in probability describes a sequence of random variables whose values become arbitrarily close to a limit with probability approaching 1. Formally, a sequence {Xn} converges in probability to X if, for every ε > 0, P(|Xn − X| > ε) → 0 as n → ∞. It is weaker than almost sure convergence but stronger than convergence in distribution.
The steps for checking convergence in probability are as follows:
- Define a sequence of random variables {Xn} for n = 1, 2, …, where each random variable is a function of the sample.
- Identify a candidate limit l.
- Fix a tolerance ε > 0 and compute P(|Xn − l| > ε), the probability that Xn is farther than ε from l.
- If this probability tends to 0 as n → ∞ for every ε > 0, then the sequence converges in probability to l.
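As a concrete check of these steps, here is a minimal Python sketch (assumed, not from the original text) that estimates P(|Xn − l| > ε) by Monte Carlo for the sample mean of Uniform(0, 1) draws, whose limit l = 0.5 is guaranteed by the weak law of large numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05   # the tolerance epsilon in the definition
reps = 2000  # Monte Carlo replications used to estimate the probability

for n in [10, 100, 1000, 10000]:
    samples = rng.uniform(0.0, 1.0, size=(reps, n))
    means = samples.mean(axis=1)                 # one X_n per replication
    p_far = np.mean(np.abs(means - 0.5) > eps)   # estimate of P(|X_n - 0.5| > eps)
    print(f"n={n:>6}  P(|mean - 0.5| > {eps}) ~ {p_far:.3f}")
```

Running this shows the estimated probability shrinking toward 0 as n grows, which is exactly the criterion in the last step above.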
Examples
- Bayesian estimation of the mean of a normal distribution: the posterior mean converges in probability to the true mean as the sample size increases, with the posterior distribution concentrating around it (sketched after this list).
- Estimating the variance of a normal distribution with the sample variance: the estimator converges in probability to the true variance as the sample size increases (sketched below).
- Estimating the parameters of a linear regression model by least squares: the estimator converges in probability to the true parameters as the sample size increases (sketched below).
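A minimal sketch of the first example, assuming a conjugate Normal(mu0, tau2) prior on the mean and unit-variance Normal data so the posterior mean has a closed form; the specific numbers and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_true = 1.5          # true mean being estimated (illustrative)
mu0, tau2 = 0.0, 10.0  # assumed Normal(mu0, tau2) prior on the mean

for n in [10, 100, 1000, 10000]:
    x = rng.normal(mu_true, 1.0, size=n)
    # Closed-form posterior mean for a Normal(mu0, tau2) prior
    # and Normal(mu, 1) data: precision-weighted average.
    post_mean = (mu0 / tau2 + x.sum()) / (1.0 / tau2 + n)
    print(f"n={n:>6}  posterior mean ~ {post_mean:.3f}")
```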
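For the second example, the same Monte Carlo check used earlier applies to the sample variance; the Normal(0, 2) data and the tolerance below are illustrative choices, not from the original text:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, eps, reps = 4.0, 0.5, 2000  # true variance, tolerance, replications

for n in [10, 100, 1000, 10000]:
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    s2 = x.var(axis=1, ddof=1)                  # unbiased sample variance
    p_far = np.mean(np.abs(s2 - sigma2) > eps)  # P(|S^2_n - sigma^2| > eps)
    print(f"n={n:>6}  P(|S^2 - {sigma2}| > {eps}) ~ {p_far:.3f}")
```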
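For the third example, a sketch showing ordinary least squares estimates settling around the true coefficients as n grows, under an assumed model y = 2 + 3x + noise; all names and values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1 = 2.0, 3.0  # assumed true intercept and slope

for n in [10, 100, 1000, 10000]:
    x = rng.uniform(-1.0, 1.0, size=n)
    y = beta0 + beta1 * x + rng.normal(0.0, 1.0, size=n)
    X = np.column_stack([np.ones(n), x])         # design matrix with intercept
    est, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    print(f"n={n:>6}  beta_hat = ({est[0]:.3f}, {est[1]:.3f})")
```

In each run the estimates wander less and less from (2, 3) as n increases, which is the consistency property the example describes.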