scipy 1.10.1

anderson_ksamp(samples, midrank=True)

The k-sample Anderson-Darling test is a modification of the one-sample Anderson-Darling test. It tests the null hypothesis that k samples are drawn from the same population without having to specify the distribution function of that population. The critical values depend on the number of samples.

Notes

Scholz and Stephens (1987) define three versions of the k-sample Anderson-Darling test: one for continuous distributions and two for discrete distributions, in which ties between samples may occur. By default this routine computes the version based on the midrank empirical distribution function; this test is applicable to continuous and discrete data. If midrank is set to False, the right-side empirical distribution function is used for a test on discrete data. According to Scholz and Stephens (1987), the two discrete test statistics differ only slightly if a few collisions due to round-off errors occur in the test not adjusted for ties between samples.

The critical values corresponding to the significance levels from 0.01 to 0.25 are taken from Scholz and Stephens (1987). Returned p-values are floored at 0.1% and capped at 25%. Since the range of critical values might be extended in future releases, it is recommended not to test p == 0.25, but rather p >= 0.25 (and analogously for the lower bound).
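As a small sketch of the recommended bounds check (the seed and sample sizes below are illustrative assumptions, not taken from the documentation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)  # illustrative seed
# Two samples drawn from the same normal distribution.
res = stats.anderson_ksamp([rng.normal(size=100), rng.normal(size=100)])

# Returned p-values always lie within the floored/capped range.
assert 0.001 <= res.pvalue <= 0.25

# Test against the cap with >=, not ==, since the range of critical
# values (and hence the cap) might be extended in future releases.
if res.pvalue >= 0.25:
    print("p-value hit the 25% cap; interpret it as 'at least 25%'")
```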

Parameters

samples : sequence of 1-D array_like

Collection of 1-D arrays of sample data.

midrank : bool, optional

Type of Anderson-Darling test which is computed. Default (True) is the midrank test applicable to continuous and discrete populations. If False, the right-side empirical distribution function is used.
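A hedged sketch of the two variants on tied, integer-valued data (the seed, sample sizes, and value range are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # illustrative seed
# Integer-valued samples, so ties between samples occur.
a = rng.integers(0, 5, size=60)
b = rng.integers(0, 5, size=40)

# Default midrank ECDF variant (continuous and discrete populations).
res_mid = stats.anderson_ksamp([a, b], midrank=True)
# Right-side ECDF variant for discrete data.
res_right = stats.anderson_ksamp([a, b], midrank=False)

# Per the Notes, the two discrete statistics typically differ only slightly.
print(res_mid.statistic, res_right.statistic)
```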

Raises

ValueError

If fewer than 2 samples are provided, a sample is empty, or there are no distinct observations in the samples.
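A minimal sketch of two of these error conditions (the example data are assumptions for illustration):

```python
import numpy as np
from scipy import stats

# Fewer than two samples raises ValueError.
try:
    stats.anderson_ksamp([np.array([1.0, 2.0, 3.0])])
except ValueError as err:
    print("ValueError:", err)

# Samples with no distinct observations also raise ValueError.
try:
    stats.anderson_ksamp([np.ones(5), np.ones(5)])
except ValueError as err:
    print("ValueError:", err)
```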

Returns

res : Anderson_ksampResult

An object containing attributes:

statistic : float

Normalized k-sample Anderson-Darling test statistic.

critical_values : array

The critical values for significance levels 25%, 10%, 5%, 2.5%, 1%, 0.5%, 0.1%.

pvalue : float

The approximate p-value of the test; it is floored and capped as described in the Notes.

See Also

anderson

1-sample Anderson-Darling test

ks_2samp

2-sample Kolmogorov-Smirnov test
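As a hedged sketch of how the two related tests are called on the same pair of samples (seed and sample sizes are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)  # illustrative seed
x = rng.normal(size=50)
y = rng.normal(loc=0.5, size=50)

# Two-sample Kolmogorov-Smirnov test: takes exactly two arrays.
ks = stats.ks_2samp(x, y)
# k-sample Anderson-Darling test: takes a sequence of samples (here two).
ad = stats.anderson_ksamp([x, y])

print(ks.pvalue, ad.pvalue)
```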

Examples

import numpy as np
from scipy import stats
rng = np.random.default_rng()
res = stats.anderson_ksamp([rng.normal(size=50),
                            rng.normal(loc=0.5, size=30)])
res.statistic, res.pvalue
res.critical_values
The null hypothesis that the two random samples come from the same distribution can be rejected at the 5% level because the returned test value is greater than the critical value for 5% (1.961) but not at the 2.5% level. The interpolation gives an approximate p-value of 4.99%.
res = stats.anderson_ksamp([rng.normal(size=50),
                            rng.normal(size=30),
                            rng.normal(size=20)])
res.statistic, res.pvalue
res.critical_values
The null hypothesis cannot be rejected for three samples from an identical distribution. The reported p-value (25%) has been capped and may not be very accurate (since it corresponds to the value 0.449 whereas the statistic is -0.291).

Source: scipy/stats/_morestats.py#2093