scipy.stats.monte_carlo_test(sample, rvs, statistic, *, vectorized=None, n_resamples=9999, batch=None, alternative='two-sided', axis=0)

Monte Carlo test that a sample is drawn from a given distribution.

The null hypothesis is that the provided sample was drawn at random from the distribution for which rvs generates random variates. The value of the statistic for the given sample is compared against a Monte Carlo null distribution: the value of the statistic for each of n_resamples samples generated by rvs. This gives the p-value, the probability of observing such an extreme value of the test statistic under the null hypothesis.
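As a rough sketch of the procedure (a simplified illustration, not the library's implementation; the sample, rvs, and statistic below are placeholders), the test amounts to the following, here for a 'greater'-style comparison:

import numpy as np
from scipy import stats
rng = np.random.default_rng()
sample = stats.skewnorm.rvs(a=1, size=50, random_state=rng)     # placeholder data
rvs = lambda size: stats.norm.rvs(size=size, random_state=rng)  # null-distribution sampler
statistic = lambda x: stats.skew(x)                             # statistic of interest
observed = statistic(sample)
# Monte Carlo null distribution: the statistic of many samples drawn from
# the hypothesized distribution, each the same size as the original sample.
null_distribution = np.array(
    [statistic(rvs(size=sample.shape)) for _ in range(9999)])
# Fraction of the null distribution at least as large as the observed value.
pvalue = np.mean(null_distribution >= observed)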

Parameters

sample : array-like

An array of observations.

rvs : callable

Generates random variates from the distribution against which sample will be tested. rvs must be a callable that accepts keyword argument size (e.g. rvs(size=(m, n))) and returns an N-d array sample of that shape.
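For example, either of the following callables (illustrative, assuming a standard normal null distribution) satisfies this interface:

import numpy as np
from scipy import stats
rng = np.random.default_rng()
rvs = lambda size: stats.norm.rvs(size=size, random_state=rng)  # scipy.stats sampler
rvs = lambda size: rng.normal(size=size)                        # equivalent NumPy sampler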

statistic : callable

Statistic for which the p-value of the hypothesis test is to be calculated. statistic must be a callable that accepts a sample (e.g. statistic(sample)) and returns the resulting statistic. If vectorized is set True, statistic must also accept a keyword argument axis and be vectorized to compute the statistic along the provided axis of the sample array.
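For instance, a vectorized statistic based on the sample skewness (the same statistic used in the Examples below) can be written as:

from scipy import stats
def statistic(sample, axis):
    # Accepts an N-d array and reduces along `axis`, so it is vectorized.
    return stats.skew(sample, axis=axis)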

vectorized : bool, optional

If vectorized is set False, statistic will not be passed keyword argument axis and is expected to calculate the statistic only for 1D samples. If True, statistic will be passed keyword argument axis and is expected to calculate the statistic along axis when passed an ND sample array. If None (default), vectorized will be set True if axis is a parameter of statistic. Use of a vectorized statistic typically reduces computation time.
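For example, a statistic written only for 1-D input should be passed with vectorized=False; the automatic detection for vectorized=None can be pictured as a signature check like the sketch below (an illustration of the idea, not necessarily SciPy's exact logic):

import inspect
import numpy as np
def statistic_1d(sample):
    # 1-D only; pass vectorized=False when using this with monte_carlo_test.
    return np.mean(sample) / np.std(sample)
# A statistic is treated as vectorized if `axis` appears in its signature.
print('axis' in inspect.signature(statistic_1d).parameters)  # False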

n_resamples : int, default: 9999

Number of random samples drawn from rvs used to approximate the Monte Carlo null distribution.

batch : int, optional

The number of Monte Carlo samples to process in each call to statistic. Memory usage is O(batch * sample.size[axis]). Default is None, in which case batch equals n_resamples.
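For example, to limit peak memory the resamples can be processed in smaller batches; the statistic and p-value are unchanged (the batch size of 100 below is an arbitrary illustration):

import numpy as np
from scipy import stats
from scipy.stats import monte_carlo_test
rng = np.random.default_rng()
x = stats.skewnorm.rvs(a=1, size=50, random_state=rng)
rvs = lambda size: stats.norm.rvs(size=size, random_state=rng)
# Process the 9999 resamples 100 at a time instead of all at once.
res = monte_carlo_test(x, rvs, lambda s, axis: stats.skew(s, axis=axis),
                       vectorized=True, batch=100)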

alternative : {'two-sided', 'less', 'greater'}

The alternative hypothesis for which the p-value is calculated. For each alternative, the p-value is defined as follows.

'greater' : the percentage of the null distribution that is greater than or equal to the observed value of the test statistic.

'less' : the percentage of the null distribution that is less than or equal to the observed value of the test statistic.

'two-sided' : twice the smaller of the p-values above.
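These definitions can be pictured with the following rough numeric sketch (ignoring any finite-sample adjustment the implementation may apply; the null distribution and observed value are placeholders):

import numpy as np
rng = np.random.default_rng()
null_distribution = rng.standard_normal(9999)  # placeholder null distribution
observed = 1.3                                 # placeholder observed statistic
pvalue_greater = np.mean(null_distribution >= observed)
pvalue_less = np.mean(null_distribution <= observed)
pvalue_two_sided = 2 * min(pvalue_greater, pvalue_less)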

axis : int, default: 0

The axis of sample over which to calculate the statistic.
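For example, with a 2-D sample and a vectorized statistic, each slice along the other dimension is tested separately (a sketch assuming N-d samples broadcast as suggested by the array-valued returns below):

import numpy as np
from scipy import stats
from scipy.stats import monte_carlo_test
rng = np.random.default_rng()
# Three independent samples of 50 observations each, stacked as rows.
x = stats.skewnorm.rvs(a=1, size=(3, 50), random_state=rng)
rvs = lambda size: stats.norm.rvs(size=size, random_state=rng)
res = monte_carlo_test(x, rvs, lambda s, axis: stats.skew(s, axis=axis),
                       vectorized=True, axis=-1)
# res.statistic and res.pvalue have one entry per row of x.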

Returns

statistic : float or ndarray

The observed test statistic of the sample.

pvalue : float or ndarray

The p-value for the given alternative.

null_distribution : ndarray

The values of the test statistic generated under the null hypothesis.

Examples

Suppose we wish to test whether a small sample has been drawn from a normal distribution. We decide that we will use the skew of the sample as a test statistic, and we will consider a p-value of 0.05 to be statistically significant.
import numpy as np
from scipy import stats
def statistic(x, axis):
    return stats.skew(x, axis)
After collecting our data, we calculate the observed value of the test statistic.
rng = np.random.default_rng()
x = stats.skewnorm.rvs(a=1, size=50, random_state=rng)
statistic(x, axis=0)
To determine the probability of observing such an extreme value of the skewness by chance if the sample were drawn from the normal distribution, we can perform a Monte Carlo hypothesis test. The test will draw many samples at random from the normal distribution, calculate the skewness of each sample, and compare our original skewness against this distribution to determine an approximate p-value.
from scipy.stats import monte_carlo_test
# because our statistic is vectorized, we pass `vectorized=True`
rvs = lambda size: stats.norm.rvs(size=size, random_state=rng)
res = monte_carlo_test(x, rvs, statistic, vectorized=True)
print(res.statistic)
print(res.pvalue)
The probability of obtaining a test statistic less than or equal to the observed value under the null hypothesis is ~70%. This is greater than our chosen threshold of 5%, so we cannot consider this to be significant evidence against the null hypothesis.
Note that this p-value essentially matches that of `scipy.stats.skewtest`, which relies on an asymptotic distribution of a test statistic based on the sample skewness.
stats.skewtest(x).pvalue
This asymptotic approximation is not valid for small sample sizes, but `monte_carlo_test` can be used with samples of any size.
x = stats.skewnorm.rvs(a=1, size=7, random_state=rng)
# stats.skewtest(x) would produce an error due to small sample
res = monte_carlo_test(x, rvs, statistic, vectorized=True)
The Monte Carlo distribution of the test statistic is provided for further investigation.
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.hist(res.null_distribution, bins=50)
ax.set_title("Monte Carlo distribution of test statistic")
ax.set_xlabel("Value of Statistic")
ax.set_ylabel("Frequency")
plt.show()


Source: scipy/stats/_resampling.py#669