Non-parametric tests

The monotonic correlation coefficients

A monotonic correlation may be monotonically increasing or monotonically decreasing. The relation between two features is monotonically increasing if an increase in one feature is accompanied by an increase in the other. The relation between two features is monotonically decreasing if an increase in one feature is accompanied by a decrease in the other.

The Spearman's rank-order correlation coefficient $r_s$ is used to describe the strength of a monotonic relationship between two features: $X$ and $Y$. It may be calculated for data on an ordinal or an interval scale. The value of the Spearman's rank correlation coefficient is calculated using the following formula:

\begin{displaymath} \label{rs}
r_s=1-\frac{6\sum_{i=1}^nd_i^2}{n(n^2-1)},
\end{displaymath}

where:

$d_i=R_{x_i}-R_{y_i}$ – difference between the ranks of the features $X$ and $Y$,

$n$ – number of values $d_i$.

This formula is modified when there are ties:

\begin{displaymath}
r_s=\frac{\Sigma_X+\Sigma_Y-\sum_{i=1}^nd_i^2}{2\sqrt{\Sigma_X\Sigma_Y}},
\end{displaymath}

where:

  • $\Sigma_X=\frac{n^3-n-T_X}{12}$, $\Sigma_Y=\frac{n^3-n-T_Y}{12}$,
  • $T_X=\sum_{i=1}^s (t_{i_{(X)}}^3-t_{i_{(X)}})$, $T_Y=\sum_{i=1}^s (t_{i_{(Y)}}^3-t_{i_{(Y)}})$,
  • $t$ – number of cases included in a tie.

This correction is applied when ties occur. If there are no ties, it is not needed, because the formula then reduces to the basic equation given above.
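To make the computation concrete, below is a minimal sketch (not PQStat's internal code) of $r_s$ with the tie correction, using only the Python standard library; the helper names ranks, tie_term and spearman_rs are ours for illustration:

<code python>
from collections import Counter

def ranks(values):
    """Rank 1..n; tied values share the mean of their positional ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1          # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def tie_term(values):
    """T = sum over tied groups of (t^3 - t)."""
    return sum(t**3 - t for t in Counter(values).values())

def spearman_rs(x, y):
    n = len(x)
    d2 = sum((a - b)**2 for a, b in zip(ranks(x), ranks(y)))
    tx, ty = tie_term(x), tie_term(y)
    if tx == 0 and ty == 0:                  # no ties: the basic formula
        return 1 - 6 * d2 / (n * (n**2 - 1))
    sx = (n**3 - n - tx) / 12                # Sigma_X
    sy = (n**3 - n - ty) / 12                # Sigma_Y
    return (sx + sy - d2) / (2 * (sx * sy)**0.5)
</code>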

Note

$R_s$ – the Spearman's rank correlation coefficient in a population;

$r_s$ – the Spearman's rank correlation coefficient in a sample.

The value of $r_s\in<-1; 1>$ and should be interpreted as follows:

  • $r_s\approx1$ means a strong positive (increasing) monotonic correlation – when the independent variable increases, the dependent variable increases too;
  • $r_s\approx-1$ means a strong negative (decreasing) monotonic correlation – when the independent variable increases, the dependent variable decreases;
  • if the Spearman's correlation coefficient is equal or very close to zero, there is no monotonic dependence between the analysed features (although some other, non-monotonic relation may exist, for example a sinusoidal one).

The Kendall's tau correlation coefficient (Kendall (1938)1)) is used to describe the strength of a monotonic relationship between features. It may be calculated for data on an ordinal or an interval scale. The value of the Kendall's $\tilde{\tau}$ correlation coefficient is calculated using the following formula:

\begin{displaymath}
\tilde{\tau}=\frac{2(n_C-n_D)}{\sqrt{n(n-1)-T_X}\sqrt{n(n-1)-T_Y}},
\end{displaymath}

where:

  • $n_C$ – number of pairs of observations for which the ranks of the $X$ feature and the $Y$ feature change in the same direction (the number of concordant pairs),
  • $n_D$ – number of pairs of observations for which the ranks of the $X$ feature change in the opposite direction to those of the $Y$ feature (the number of discordant pairs),
  • $T_X=\sum_{i=1}^s (t_{i_{(X)}}^2-t_{i_{(X)}})$, $T_Y=\sum_{i=1}^s (t_{i_{(Y)}}^2-t_{i_{(Y)}})$,
  • $t$ – number of cases included in a tie.

The formula for the $\tilde{\tau}$ correlation coefficient includes the correction for ties. The correction takes effect only when ties occur (if there are no ties, it vanishes, because $T_X=0$ and $T_Y=0$).
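A corresponding sketch for $\tilde{\tau}$ (again an illustration, not PQStat's implementation) counts concordant and discordant pairs directly, which is $O(n^2)$ but easy to follow; kendall_tau is a hypothetical name:

<code python>
from collections import Counter

def kendall_tau(x, y):
    """Kendall's tau with the tie correction; x, y must be numeric codes."""
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                nc += 1                      # concordant pair
            elif s < 0:
                nd += 1                      # discordant pair (ties skipped)
    tx = sum(t*t - t for t in Counter(x).values())   # T_X = sum(t^2 - t)
    ty = sum(t*t - t for t in Counter(y).values())   # T_Y
    return 2 * (nc - nd) / ((n*(n-1) - tx) * (n*(n-1) - ty))**0.5
</code>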

Note

$\tau$ – the Kendall's correlation coefficient in a population;

$\tilde{\tau}$ – the Kendall's correlation coefficient in a sample.

The value of $\tilde{\tau}\in<-1; 1>$ and should be interpreted as follows:

  • $\tilde{\tau}\approx1$ means a strong agreement of the rank orderings (an increasing monotonic correlation) – when the independent variable increases, the dependent variable increases too;
  • $\tilde{\tau}\approx-1$ means a strong disagreement of the rank orderings (a decreasing monotonic correlation) – when the independent variable increases, the dependent variable decreases;
  • if the Kendall's $\tilde{\tau}$ correlation coefficient is equal or very close to zero, there is no monotonic dependence between the analysed features (although some other, non-monotonic relation may exist, for example a sinusoidal one).

Spearman's versus Kendall's coefficient

  • for interval-scale data with a normal distribution, $r_s$ gives results close to the Pearson coefficient $r_p$, but $\tilde{\tau}$ may differ considerably from $r_p$,
  • the $\tilde{\tau}$ value is less than or equal to the $r_p$ value,
  • $\tilde{\tau}$ is an unbiased estimator of the population parameter $\tau$, while $r_s$ is a biased estimator of the population parameter $R_s$.

EXAMPLE cont. (sex-height.pqs file)

The Spearman's rank-order correlation coefficient

The test of significance for the Spearman's rank-order correlation coefficient is used to verify the hypothesis of no monotonic correlation between the analysed features of the population; it is based on the Spearman's rank-order correlation coefficient calculated for the sample. The closer the value of the $r_s$ coefficient is to 0, the weaker the monotonic relationship between the analysed features.

Basic assumptions:

  • measurement on an ordinal scale or an interval scale.

Hypotheses:

\begin{array}{cl}
\mathcal{H}_0: & R_s = 0, \\
\mathcal{H}_1: & R_s \ne 0.
\end{array}

The test statistic is defined by:

\begin{displaymath}
t=\frac{r_s}{SE},
\end{displaymath}

where $\displaystyle SE=\sqrt{\frac{1-r_s^2}{n-2}}$.

The value of the test statistic cannot be calculated when $r_s=1$ or $r_s=-1$, or when $n<3$.

The test statistic has Student's t-distribution with $n-2$ degrees of freedom.

The p-value, designated on the basis of the test statistic, is compared with the significance level $\alpha$:

\begin{array}{ccl}
$ if $ p \le \alpha & \Longrightarrow & $ reject $ \mathcal{H}_0 $ and accept $ 	\mathcal{H}_1, \\
$ if $ p > \alpha & \Longrightarrow & $ there is no reason to reject $ \mathcal{H}_0. \\
\end{array}
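As an illustration of the procedure above, the sketch below computes the test statistic and a two-sided p-value; it assumes SciPy is available for Student's t-distribution, and spearman_test is a hypothetical name:

<code python>
from scipy import stats

def spearman_test(rs, n, alpha=0.05):
    """Significance test for Spearman's r_s, as described above."""
    if n < 3 or abs(rs) == 1:
        raise ValueError("t is undefined for |r_s| = 1 or n < 3")
    se = ((1 - rs**2) / (n - 2))**0.5
    t = rs / se
    p = 2 * stats.t.sf(abs(t), df=n - 2)     # Student's t, n-2 df, two-sided
    return t, p, ("reject H0" if p <= alpha else "no reason to reject H0")
</code>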

The settings window with the Spearman's monotonic correlation can be opened in Statistics menu → NonParametric tests → monotonic correlation (r-Spearman) or in ''Wizard''.

EXAMPLE (LDL weeks.pqs file)

The effectiveness of a new therapy designed to lower cholesterol levels in the LDL fraction was studied. 88 people at different stages of the treatment were examined. We will test whether LDL cholesterol levels decrease and stabilize with the duration of the treatment (time in weeks).

Hypotheses:

$\begin{array}{cl}
\mathcal{H}_0: & $In the population, there is no monotonic relationship between treatment time and LDL levels,$\\
\mathcal{H}_1: & $In the population, there is a monotonic relationship between treatment time and LDL levels.$
\end{array}$

Comparing $p<0.0001$ with the significance level $\alpha=0.05$, we find that there is a statistically significant monotonic relationship between treatment time and LDL levels. The relationship is initially decreasing and begins to stabilize after 150 weeks. The Spearman's monotonic correlation coefficient, and therefore the strength of this monotonic relationship, is quite high at $r_s=-0.78$. The graph was plotted with a curve fitted by local LOWESS linear smoothing.

The Kendall's tau correlation coefficient

The test of significance for the Kendall's $\tilde{\tau}$ correlation coefficient is used to verify the hypothesis of no monotonic correlation between the analysed features of the population. It is based on the Kendall's tau correlation coefficient calculated for the sample. The closer the value of tau is to 0, the weaker the monotonic relationship between the analysed features.

Basic assumptions:

  • measurement on an ordinal scale or an interval scale.

Hypotheses:

\begin{array}{cl}
\mathcal{H}_0: & \tau = 0, \\
\mathcal{H}_1: & \tau \ne 0.
\end{array}

The test statistic is defined by:

\begin{displaymath}
Z=\frac{3\tilde{\tau}\sqrt{n(n-1)}}{\sqrt{2(2n+5)}}.
\end{displaymath}

The test statistic asymptotically (for a large sample size) has the normal distribution.

The p-value, designated on the basis of the test statistic, is compared with the significance level $\alpha$:

\begin{array}{ccl}
$ if $ p \le \alpha & \Longrightarrow & $ reject $ \mathcal{H}_0 $ and accept $ 	\mathcal{H}_1, \\
$ if $ p > \alpha & \Longrightarrow & $ there is no reason to reject $ \mathcal{H}_0. \\
\end{array}
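The analogous sketch for the Kendall test needs only the standard library, since the two-sided normal p-value can be written with math.erfc; kendall_test is a hypothetical name:

<code python>
import math

def kendall_test(tau, n, alpha=0.05):
    """Significance test for Kendall's tau, as described above."""
    z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal p-value
    return z, p, ("reject H0" if p <= alpha else "no reason to reject H0")
</code>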

The settings window with the Kendall's monotonic correlation can be opened in Statistics menu → NonParametric tests → monotonic correlation (tau-Kendall) or in ''Wizard''.

EXAMPLE cont. (LDL weeks.pqs file)

Hypotheses:

$\begin{array}{cl}
\mathcal{H}_0: & $In the population, there is no monotonic relationship between treatment time and LDL levels,$\\
\mathcal{H}_1: & $In the population, there is a monotonic relationship between treatment time and LDL levels.$
\end{array}$

Comparing $p<0.0001$ with the significance level $\alpha=0.05$, we find that there is a statistically significant monotonic relationship between treatment time and LDL levels. The relationship is initially decreasing and begins to stabilize after 150 weeks. The Kendall's monotonic correlation coefficient, and therefore the strength of this monotonic relationship, is quite high at $\tilde{\tau}=-0.60$. The graph was plotted with a curve fitted by local LOWESS linear smoothing.

Contingency table coefficients and their statistical significance

The contingency coefficients are calculated for raw data or for data gathered in a contingency table.

The settings window with the measures of correlation can be opened in Statistics menu → NonParametric tests → Chi-square, Fisher, OR/RR, option Measures of dependence… or in ''Wizard''.

The Yule's Q contingency coefficient

The Yule's $Q$ contingency coefficient (Yule, 19002)) is a measure of correlation, which can be calculated for $2\times2$ contingency tables.

\begin{displaymath}
Q=\frac{O_{11}O_{22}-O_{12}O_{21}}{O_{11}O_{22}+O_{12}O_{21}},
\end{displaymath}

where:

$O_{11}, O_{12}, O_{21}, O_{22}$ - observed frequencies in a contingency table.

The $Q$ coefficient value falls within the range $<-1; 1>$. The closer the value of $Q$ is to 0, the weaker the dependence between the analysed features; the closer it is to $-1$ or $+1$, the stronger the dependence. The coefficient has one disadvantage: it is not very robust to small observed frequencies (if one of them is 0, the coefficient may wrongly indicate a complete dependence of features).

The statistical significance of the Yule's $Q$ coefficient is assessed with the $Z$ test.

Hypotheses:

$\begin{array}{cl}
\mathcal{H}_0: &Q=0,\\
\mathcal{H}_1: &Q\neq 0.
\end{array}$

The test statistic is defined by:

\begin{displaymath}
Z=\frac{Q}{\sqrt{\frac{1}{4}(1-Q^2)^2(\frac{1}{O_{11}}+\frac{1}{O_{12}}+\frac{1}{O_{21}}+\frac{1}{O_{22}})}}.
\end{displaymath}

The test statistic asymptotically (for a large sample size) has the normal distribution.

The p-value, designated on the basis of the test statistic, is compared with the significance level $\alpha$:

\begin{array}{ccl}
$ if $ p \le \alpha & \Longrightarrow & $ reject $ \mathcal{H}_0 $ and accept $ 	\mathcal{H}_1, \\
$ if $ p > \alpha & \Longrightarrow & $ there is no reason to reject $ \mathcal{H}_0. \\
\end{array}
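For illustration, the sketch below computes $Q$ and its $Z$ test for a $2\times2$ table given as nested lists; yule_q_test is a hypothetical name, and the observed frequencies must all be non-zero:

<code python>
import math

def yule_q_test(table):
    """Yule's Q and its Z test for a 2x2 table [[O11, O12], [O21, O22]]."""
    (o11, o12), (o21, o22) = table
    q = (o11*o22 - o12*o21) / (o11*o22 + o12*o21)
    se = math.sqrt((1 - q*q)**2 / 4 * (1/o11 + 1/o12 + 1/o21 + 1/o22))
    z = q / se
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal p-value
    return q, z, p

# For the sex-exam table used later in this section,
# yule_q_test([[50, 40], [20, 60]]) gives Q ≈ 0.58.
</code>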

The $\phi$ contingency coefficient

The $\phi$ contingency coefficient is a measure of correlation, which can be calculated for $2\times2$ contingency tables.

\begin{displaymath}
\phi=\sqrt{\frac{\chi^2}{n}},
\end{displaymath}

where:

$\chi^2$ – value of the $\chi^2$ test statistic,

$n$ – total frequency in a contingency table.

The $\phi$ coefficient value falls within the range $<0; 1>$. The closer the value of $\phi$ is to 0, the weaker the dependence between the analysed features; the closer it is to 1, the stronger the dependence.

The $\phi$ contingency coefficient is considered statistically significant if the p-value calculated on the basis of the $\chi^2$ test (designated for this table) is equal to or less than the significance level $\alpha$.

The Cramer's V contingency coefficient

The Cramer's V contingency coefficient (Cramer, 19463)) is an extension of the $\phi$ coefficient to $r\times c$ contingency tables.

\begin{displaymath}
V=\sqrt{\frac{\chi^2}{n(w'-1)}},
\end{displaymath}

where:

$\chi^2$ – value of the $\chi^2$ test statistic,

$n$ – total frequency in a contingency table,

$w'$ – the smaller of the values $r$ and $c$.

The $V$ coefficient value falls within the range $<0; 1>$. The closer the value of $V$ is to 0, the weaker the dependence between the analysed features; the closer it is to 1, the stronger the dependence. The $V$ coefficient value also depends on the table size, so it should not be used to compare contingency tables of different sizes.

The $V$ contingency coefficient is considered statistically significant if the p-value calculated on the basis of the $\chi^2$ test (designated for this table) is equal to or less than the significance level $\alpha$.

W-Cohen contingency coefficient

The $W$-Cohen contingency coefficient (Cohen (1988)4)) is a modification of the $V$-Cramer coefficient and can be calculated for $r\times c$ tables.

\begin{displaymath}
W=\sqrt{\frac{\chi^2}{n(w'-1)}}\sqrt{w'-1},
\end{displaymath}

where:

$\chi^2$ – value of the $\chi^2$ test statistic,

$n$ – total frequency in a contingency table,

$w'$ – the smaller of the values $r$ and $c$.

The $W$ coefficient value falls within the range $<0; \max W>$, where $\max W=\sqrt{w'-1}$ (for tables in which at least one variable has only two categories, the value of the coefficient $W$ falls within the range $<0; 1>$). The closer the value of $W$ is to 0, the weaker the dependence between the analysed features; the closer it is to $\max W$, the stronger the dependence. The $W$ coefficient value also depends on the table size, so it should not be used to compare contingency tables of different sizes.

The $W$ contingency coefficient is considered statistically significant if the p-value calculated on the basis of the $\chi^2$ test (designated for this table) is equal to or less than the significance level $\alpha$.

The Pearson's C contingency coefficient

The Pearson's $C$ contingency coefficient is a measure of correlation, which can be calculated for $r\times c$ contingency tables.

\begin{displaymath}
C=\sqrt{\frac{\chi^2}{\chi^2+n}},
\end{displaymath}

where:

$\chi^2$ – value of the $\chi^2$ test statistic,

$n$ – total frequency in a contingency table.

The $C$ coefficient value falls within the range $<0; 1)$. The closer the value of $C$ is to 0, the weaker the dependence between the analysed features; the farther from 0, the stronger the dependence. The $C$ coefficient value also depends on the table size (the bigger the table, the closer to 1 the $C$ value can be), which is why the upper limit that $C$ can reach for the particular table size should be calculated:

\begin{displaymath}
C_{max}=\sqrt{\frac{w'-1}{w'}},
\end{displaymath}

where:

$w'$ – the smaller of the values $r$ and $c$.

An inconvenient consequence of the dependence of the $C$ value on the table size is that $C$ coefficients calculated for contingency tables of different sizes cannot be compared. A somewhat better measure is the contingency coefficient adjusted for the table size ($C_{adj}$):

\begin{displaymath}
C_{adj}=\frac{C}{C_{max}}.
\end{displaymath}

The $C$ contingency coefficient is considered statistically significant if the p-value calculated on the basis of the $\chi^2$ test (designated for this table) is equal to or less than the significance level $\alpha$.
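The chi-square based coefficients can all be obtained in one pass over an $r\times c$ table, as in the sketch below (an illustration of the formulas above, not PQStat's code; chi2_coefficients is a hypothetical name). $\phi$ is meaningful only in the $2\times2$ case, where it coincides with $V$ and $W$:

<code python>
def chi2_coefficients(table):
    """phi, V, W, C and C_adj for an r x c table of observed frequencies."""
    r, c = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(r)) for j in range(c)]
    chi2 = sum((table[i][j] - row_tot[i]*col_tot[j]/n)**2
               / (row_tot[i]*col_tot[j]/n)
               for i in range(r) for j in range(c))
    wp = min(r, c)                           # w' = smaller of r and c
    phi = (chi2 / n)**0.5                    # meaningful for 2x2 tables
    v = (chi2 / (n * (wp - 1)))**0.5
    w = v * (wp - 1)**0.5                    # W-Cohen = V * sqrt(w'-1)
    cc = (chi2 / (chi2 + n))**0.5
    c_adj = cc / ((wp - 1) / wp)**0.5        # divide by C_max
    return {"chi2": chi2, "phi": phi, "V": v, "W": w, "C": cc, "C_adj": c_adj}

# For the sex-exam table below, chi2_coefficients([[50, 40], [20, 60]])
# reproduces chi2 ≈ 16.33, phi = V = W ≈ 0.31 and C_adj ≈ 0.42.
</code>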

EXAMPLE (sex-exam.pqs file)

There is a sample of 170 persons ($n=170$) for whom 2 features were analysed ($X$=sex, $Y$=passing the exam). Each of these features has 2 categories ($X_1$=f, $X_2$=m, $Y_1$=yes, $Y_2$=no). Based on the sample, we would like to find out whether there is any dependence between sex and passing the exam in the analysed population. The data distribution is presented in the contingency table:

\begin{tabular}{|c|c||c|c|c|}
\hline
\multicolumn{2}{|c||}{Observed frequencies}& \multicolumn{3}{|c|}{passing the exam}\\\cline{3-5}
\multicolumn{2}{|c||}{$O_{ij}$} & yes & no & total \\\hline \hline
\multirow{3}{*}{sex}& f & 50 & 40 & 90 \\\cline{2-5}
& m & 20 & 60 & 80 \\\cline{2-5}
& total & 70 & 100 & 170\\\hline
\end{tabular}

The test statistic value is $\chi^2=16.33$ and the $p$ value calculated for it is p<0.0001. The result indicates that there is a statistically significant dependence between sex and passing the exam in the analysed population.

The values of the coefficients based on the $\chi^2$ test, i.e. the strength of the correlation between the analysed features, are:

$C_{adj}$-Pearson = 0.42.

$V$-Cramer = $\phi$ = $W$-Cohen = 0.31

The $Q$-Yule = 0.58, and the $p$ value of the $Z$ test (like that of the $\chi^2$ test) indicates a statistically significant dependence between the analysed features.

1)
Kendall M.G. (1938), A new measure of rank correlation. Biometrika, 30, 81-93
2)
Yule G. (1900), On the association of the attributes in statistics: With illustrations from the material of the childhood society, &c. Philosophical Transactions of the Royal Society, Series A, 194, 257-319
3)
Cramér H. (1946), Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press
4)
Cohen J. (1988), Statistical Power Analysis for the Behavioral Sciences, Lawrence Erlbaum Associates, Hillsdale, New Jersey