What Is Significance and Its Level


Significance in statistics, often referred to as statistical significance, means that an observed result is unlikely to have occurred by chance alone. The level of significance ($\alpha$) is fundamentally the probability of rejecting a true null hypothesis ($H_0$).



  1. Definition and Measurement of Significance:

    • Hypothesis Testing is sometimes referred to as significance testing. It is the process of inferring from a sample whether to reject a certain statement about a population.
    • The level of significance ($\alpha$) is the probability that the statistical test results in rejecting the null hypothesis ($H_0$) when $H_0$ is actually true. This mistake is known as a Type I error.
    • By convention, a probability of less than 5% (a 1/20 chance) is considered an unlikely event, so $\alpha$ is usually set at 0.05. A difference that is significant at the 5% level is often expressed as $p < 0.05$.
    • When results are considered "statistically significant," it means the sample data is incompatible with the null hypothesis, which is rejected in favor of the alternative hypothesis ($H_1$).
    • The $p$-value (or significance probability) is a post hoc measure of error. It is the probability, calculated assuming $H_0$ is true, that the test statistic takes a value equal to or more extreme than the value actually observed. A small $p$-value is strong evidence against $H_0$; the result is declared significant when $p < \alpha$.
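
As a minimal sketch of the definition above, the following computes a two-sided $p$-value for a one-sample $z$-test in Python. It assumes the population standard deviation is known, and the sample figures (mean 52.3, $n = 64$) are invented purely for illustration.

```python
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-sided p-value for a one-sample z-test.

    Assumes the population standard deviation sigma is known, so under
    H0: mu = mu0 the statistic z = (x_bar - mu0) / (sigma / sqrt(n))
    follows a standard normal distribution.
    """
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # P(|Z| >= |z|) for a standard normal Z, via the complementary error function.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical data: sample mean 52.3 from n = 64 observations,
# testing H0: mu = 50 with known sigma = 8.
z, p = z_test_p_value(52.3, 50.0, 8.0, 64)
print(f"z = {z:.3f}, p = {p:.4f}")  # p is below 0.05, so H0 is rejected at the 5% level
```

Here $z = 2.3$ and $p \approx 0.021$, so at $\alpha = 0.05$ the observed value is "extreme enough" and $H_0$ is rejected; at the stricter $\alpha = 0.01$ it would not be.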
  2. Statistical Tests for Significance (Hypothesis Testing): A wide range of inferential statistical tests are used to determine significance, typically categorized based on the type of data (continuous/discrete) and assumptions (parametric/nonparametric). These tests compare an observed test statistic (a ratio based on sample data) to a preset critical value or calculate a $p$-value to determine if the result is extreme enough to reject $H_0$.

    Common statistical tests employed for significance testing include:

    • Parametric Procedures (generally assume normality and homogeneity of variance):

      • $t$-Tests (used primarily when comparing one or two means, or paired data):
        • One-Sample $t$-Test.
        • Two-Sample $t$-Test.
        • Matched Pair $t$-Test (Paired $t$-Test).
      • Analysis of Variance (ANOVA) (used for comparing means of three or more groups, relying on the $F$-distribution).
      • $Z$-Tests (used for large samples, especially concerning proportions or means with known population variance):
        • $Z$-Test of Proportions (One-sample or Two-sample case).
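
The two-sample $Z$-test of proportions can be sketched as below, using the pooled proportion under $H_0$ to form the standard error (a large-sample procedure). The counts are hypothetical.

```python
import math
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test of H0: p1 = p2 for two independent samples.

    The standard error is built from the pooled proportion under H0,
    which is appropriate for large samples.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided tail probability
    return z, p

# Hypothetical counts: 45 successes out of 100 vs 30 successes out of 100.
z, p = two_proportion_z_test(45, 100, 30, 100)
print(f"z = {z:.3f}, p = {p:.4f}")
```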
    • Tests for Relationships and Association:

      • Correlation and Regression (to test if a relationship exists, usually $H_0: r_{xy} = 0$ or $H_0: \beta_1 = 0$).
      • Chi Square ($\chi^2$) Tests (used when only discrete variables are involved):
        • Chi Square Goodness-of-Fit Test.
        • Chi Square Test of Independence (or Test for Association).
        • Related tests: Fisher’s exact test, McNemar's test, Cochran-Mantel-Haenszel test.
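
A minimal sketch of the Chi Square Test of Independence for a $2 \times 2$ table follows; with one degree of freedom, the $\chi^2$ survival function reduces to $\operatorname{erfc}(\sqrt{x/2})$, so no statistics library is needed. The observed counts are made up for illustration.

```python
import math

def chi_square_2x2(table):
    """Chi-square test of independence for a 2x2 contingency table.

    table is [[a, b], [c, d]] of observed counts. With one degree of
    freedom, the chi-square survival function is erfc(sqrt(x / 2)).
    """
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / total  # expected count under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: rows = treatment/control, columns = improved/not improved.
chi2, p = chi_square_2x2([[30, 10], [18, 22]])
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```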
    • Nonparametric Tests (alternatives used when assumptions like normality are not met):

      • Wilcoxon Signed Rank Test (alternative to paired $t$-test).
      • Wilcoxon Rank Sum Test (Mann–Whitney $U$ test, alternative to two-sample $t$-test).
      • Kruskal–Wallis Test (alternative to One-Way ANOVA).
      • Sign Test (alternative to the one-sample or paired $t$-test; the hypothesis concerns the median rather than the mean).

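To illustrate one of the nonparametric alternatives, here is a sketch of the two-sided sign test. Under $H_0$ (median difference zero), each nonzero paired difference is positive with probability $1/2$, so the exact $p$-value comes from a Binomial$(n, 1/2)$ distribution. The paired differences below are hypothetical.

```python
from math import comb

def sign_test(diffs):
    """Two-sided sign test for H0: the median difference is zero.

    Zero differences are discarded; under H0 the number of positive
    signs among the remaining n follows Binomial(n, 1/2).
    """
    signs = [d for d in diffs if d != 0]
    n = len(signs)
    k = sum(1 for d in signs if d > 0)
    extreme = min(k, n - k)
    # Exact tail probability P(X <= extreme) under Binomial(n, 1/2),
    # doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(extreme + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical before/after differences for 10 paired measurements
# (8 positive, 2 negative).
p = sign_test([1.2, 0.8, -0.3, 2.1, 0.5, 1.7, 0.9, -0.4, 1.1, 0.6])
print(f"p = {p:.4f}")  # about 0.109: not significant at the 5% level
```

Note how the conclusion can differ from a parametric test on the same data: the sign test uses only the signs of the differences, so it sacrifices power in exchange for making no normality assumption.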