scipy 1.10.1

ansari(x, y, alternative='two-sided')

Perform the Ansari-Bradley test for equal scale parameters.

The Ansari-Bradley test is a non-parametric test for the equality of the scale parameter of the distributions from which two samples were drawn. The null hypothesis states that the ratio of the scale of the distribution underlying x to the scale of the distribution underlying y is 1.

Notes

The p-value given is exact when the sample sizes are both less than 55 and there are no ties, otherwise a normal approximation for the p-value is used.
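As a quick sketch of this behavior (the sample sizes and data here are illustrative, not from the original page): small tie-free samples get an exact p-value, while ties force the normal approximation, which recent SciPy versions flag with a warning.

```python
import numpy as np
from scipy.stats import ansari

rng = np.random.default_rng(12345)

# Both samples have fewer than 55 observations and, being drawn from a
# continuous distribution, contain no ties, so the p-value is exact.
x = rng.normal(size=20)
y = rng.normal(size=15)
res = ansari(x, y)

# Samples with ties fall back to the normal approximation instead
# (SciPy may emit a warning here; the exact message varies by version).
res_ties = ansari([1, 1, 2, 2, 3], [1, 2, 2, 3, 3])
```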

Parameters

x, y : array_like

Arrays of sample data.

alternative : {'two-sided', 'less', 'greater'}, optional

Defines the alternative hypothesis. Default is 'two-sided'. The following options are available:

  • 'two-sided': the ratio of scales is not equal to 1.
  • 'less': the ratio of scales is less than 1.
  • 'greater': the ratio of scales is greater than 1.

Returns

statistic : float

The Ansari-Bradley test statistic.

pvalue : float

The p-value of the hypothesis test.
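A minimal sketch of working with the returned value (the data here is illustrative): the result exposes `statistic` and `pvalue` as attributes and also unpacks like a plain tuple.

```python
import numpy as np
from scipy.stats import ansari

rng = np.random.default_rng(0)
x = rng.normal(scale=2.0, size=30)
y = rng.normal(scale=2.0, size=30)

res = ansari(x, y)

# Access the fields by name ...
stat, p = res.statistic, res.pvalue

# ... or unpack the result like a (statistic, pvalue) tuple.
stat2, p2 = res
```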


See Also

fligner

A non-parametric test for the equality of k variances

mood

A non-parametric test for the equality of two scale parameters
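For context, the related `mood` test listed above can be applied to the same data. This sketch (with assumed illustrative samples) just shows the parallel call pattern; both tests compare scale parameters but use different rank statistics.

```python
import numpy as np
from scipy.stats import ansari, mood

rng = np.random.default_rng(42)
a = rng.normal(scale=1.0, size=40)
b = rng.normal(scale=3.0, size=40)

# ansari returns a result object; mood's result also unpacks
# into a (statistic, p-value) pair.
ansari_res = ansari(a, b)
mood_z, mood_p = mood(a, b)
```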

Examples

>>> import numpy as np
>>> from scipy.stats import ansari
>>> rng = np.random.default_rng()
For these examples, we'll create three random data sets. The first two, with sizes 35 and 25, are drawn from a normal distribution with mean 0 and standard deviation 2. The third data set has size 25 and is drawn from a normal distribution with standard deviation 1.25.
>>> x1 = rng.normal(loc=0, scale=2, size=35)
>>> x2 = rng.normal(loc=0, scale=2, size=25)
>>> x3 = rng.normal(loc=0, scale=1.25, size=25)
First we apply `ansari` to `x1` and `x2`. These samples are drawn from the same distribution, so we expect the test not to lead us to conclude that the scales of the distributions are different.
>>> ansari(x1, x2)
With a p-value close to 1, we cannot conclude that there is a significant difference in the scales (as expected).
Now apply the test to `x1` and `x3`:
>>> ansari(x1, x3)
The probability of observing such an extreme value of the statistic under the null hypothesis of equal scales is only 0.03087%. We take this as evidence against the null hypothesis in favor of the alternative: the scales of the distributions from which the samples were drawn are not equal.
We can use the `alternative` parameter to perform a one-tailed test. In the example above, the scale of `x1` is greater than that of `x3`, so the ratio of their scales is greater than 1. This means that the p-value when ``alternative='greater'`` should be near 0, and hence we should be able to reject the null hypothesis:
>>> ansari(x1, x3, alternative='greater')
As we can see, the p-value is indeed quite low. Use of ``alternative='less'`` should thus yield a large p-value:
>>> ansari(x1, x3, alternative='less')
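Because the generator above is unseeded, the exact numbers differ between runs. A seeded variant (the seed value here is an arbitrary choice, not from the original example) makes the whole workflow reproducible:

```python
import numpy as np
from scipy.stats import ansari

rng = np.random.default_rng(123456789)
x1 = rng.normal(loc=0, scale=2, size=35)
x3 = rng.normal(loc=0, scale=1.25, size=25)

# With a fixed seed, these p-values are identical on every run.
two_sided = ansari(x1, x3).pvalue
greater = ansari(x1, x3, alternative='greater').pvalue
less = ansari(x1, x3, alternative='less').pvalue
```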

Back References

The following pages refer to this document explicitly or contain code examples that use it.

scipy.stats._morestats:mood



GitHub: /scipy/stats/_morestats.py#2319