Saturday, May 19, 2018

Revisit the Difficulty in Teaching Statistics


June 2017

Many years ago, I listened to a famous talk on why statistics is like literature and mathematics is like music. The point is that mathematics, like music, relies on deduction; statistics, on the other hand, is for induction. Suppose that A causes B, and we know either A or B. If we know A, we use mathematics to deduce what will follow from A. If we know B, we use statistics to learn about the cause of B. Deduction is guided by clearly defined logic; induction has no such set of rules. Before we take our first statistics class, we are entirely immersed in deduction. When we first learn statistics, we treat it as a branch of mathematics. The class inevitably begins with probability, which reinforces the impression of deductive thinking. By the time the statistics portion of the class starts, we have already sunk hopelessly into deduction mode, and most of us never dig ourselves out of it. By the time we take our first graduate-level statistics class, we have probably forgotten what little statistics we learned as undergraduates.

What makes the learning even harder is that the graduate-level class is almost always taught by professors from the statistics department on a rotational basis. No statistics professor wants to teach an applied course in a science field: student teaching evaluations for these courses are below average regardless of the quality of instruction. With professors teaching the class either for the first time ever or for the first time after a long hiatus, the teaching quality is rarely optimal.

From a student's perspective, statistics is impossible to learn well. The thought process of modern statistics is hypothetico-deductive. To understand it, we need to know a lot of science in order to propose reasonable hypotheses, and we also need to know a lot about probability distributions in order to judge which distribution is likely to be relevant. New students know neither. As a result, we teach a few simple models (t-test, ANOVA, regression), as in the sketch below; a good student can master each model and manage to use it in simple applications.
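To make this concrete, here is a minimal sketch of those three models on synthetic data, written in Python with scipy and statsmodels (my choice of tools, not anything prescribed by a particular course):

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Two-sample t-test: compare the means of two hypothetical groups.
a = rng.normal(loc=5.0, scale=1.0, size=30)
b = rng.normal(loc=5.5, scale=1.0, size=30)
t_stat, p_val = stats.ttest_ind(a, b)
print(f"t-test: t = {t_stat:.2f}, p = {p_val:.3f}")

# One-way ANOVA: the same comparison extended to three groups.
c = rng.normal(loc=6.0, scale=1.0, size=30)
f_stat, p_val = stats.f_oneway(a, b, c)
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_val:.3f}")

# Simple linear regression: the response as a linear function of x.
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=50)
fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params)  # estimated intercept and slope
```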

Recently, I read Neyman and Pearson (1933) to understand the development of the Neyman-Pearson lemma. The first two pages of the article are particularly stimulating. Neyman and Pearson traced statistical hypothesis testing back to Bayes, as the test of a cause-and-effect hypothesis. They then described what looks like the hypothetico-deductive process of hypothesis testing and concluded that "no test of this kind could give useful results." The Neyman-Pearson lemma is then described as a "rule of behavior" with regard to the hypothesis $H$. When following this rule, we "shall reject $H$ when it is true not more, say, than once in a hundred times, and we shall reject $H$ sufficiently often when it is false." Furthermore, "such a rule tells us nothing as to whether in a particular case $H$ is true or false," whether the test result is statistically significant or not. It appears to me that frequency-based classical statistics is really designed for engineers, whereas Bayesian statistics is suited for scientists.
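The "rule of behavior" reading can be made concrete with a small simulation (my own illustration, not from the paper): when $H$ is true and we test repeatedly at the 5% level, we reject it in roughly five percent of experiments in the long run, yet no single rejection tells us anything about the particular case at hand.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
rejections = 0
for _ in range(n_experiments):
    # H is true: both samples are drawn from the same N(0, 1) population.
    a = rng.normal(size=20)
    b = rng.normal(size=20)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        rejections += 1

# What the rule guarantees is this long-run rate, not the truth of H
# in any individual experiment.
print(f"Rejected the true H in {rejections / n_experiments:.1%} of tests")
```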

If teaching classical statistics is hard, teaching Bayesian statistics is harder (especially to American students who are poorly trained in calculus).
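For what it is worth, the calculus burden can be postponed with conjugate priors. Here is a minimal Bayesian sketch (my own toy example) using a Beta-Binomial model, where the posterior comes in closed form; in the general case the posterior requires integration, which is exactly where the calculus training bites.

```python
from scipy import stats

# Prior belief about a success probability p: Beta(2, 2),
# a mild preference for values near one half.
alpha_prior, beta_prior = 2, 2

# Hypothetical data: 7 successes in 10 trials.
successes, trials = 7, 10

# Conjugate update: the posterior is Beta(alpha + successes,
# beta + failures), no integration required.
posterior = stats.beta(alpha_prior + successes,
                       beta_prior + trials - successes)

print(f"Posterior mean of p: {posterior.mean():.3f}")
print("95% credible interval:",
      tuple(round(q, 3) for q in posterior.interval(0.95)))
```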
