An article on BBC News about the Ebola outbreak got me thinking about data. In media reports, the virus is described as having a shockingly high 90% fatality rate. The article points out that the key words in those reports are “up to,” and that the fatality rate is often lower. I looked up the actual figures from the WHO, and sure enough the statistics show a rate below 90%.
In actuality, the fatality rates for the current outbreak vary quite widely, from 15% to 100%(!!), depending on what figures you choose to include. Do you look at confirmed cases? Probable? Suspected? Or do you include them all? And do you look at individual countries or average them all together?
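To make that concrete, here is a small sketch using made-up case counts (illustrative only, not WHO figures) showing how the choice of which cases to include changes the computed rate:

```python
# Hypothetical case counts for one country (illustrative, not real WHO data).
cases = {
    "confirmed": {"cases": 300, "deaths": 180},
    "probable":  {"cases": 150, "deaths": 60},
    "suspected": {"cases": 200, "deaths": 30},
}

def fatality_rate(categories):
    """Deaths divided by cases, over the chosen case categories."""
    total_cases = sum(cases[c]["cases"] for c in categories)
    total_deaths = sum(cases[c]["deaths"] for c in categories)
    return total_deaths / total_cases

print(f"Confirmed only:  {fatality_rate(['confirmed']):.0%}")
print(f"All categories:  {fatality_rate(['confirmed', 'probable', 'suspected']):.0%}")
```

With these invented numbers, "confirmed only" gives 60% while including everything gives about 42% — same outbreak, very different headline, purely from the inclusion choice.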
If you look at the data broken down beyond the headline statistics, what emerges is a much more complicated picture of the Ebola outbreak. It is worse in certain areas and better in others. Depending on the response to the disease, its deadliness may be reduced and – presumably – its virulence diminished. Community practices, health care facilities, etc. all change how the disease affects people.
Unfortunately, most people don’t look that deeply at the statistics. They hear about a horrible disease, see the “90% fatal” figure and start getting hysterical. While the Ebola outbreak is a horrible and tragic event – and does look like it’s going to get much worse – such overreaction may cause individuals and nations to make costly missteps. There are deadlier diseases, and certain nations’ efforts may be more productively directed at other health issues.
As a teacher, this makes me think of our students’ test scores and the similar data we use to assess, group and rate them. Too often, teachers and administrators latch on to one figure and use it to determine a course of action for an individual student, a class or a school. A student scores at the 20th percentile on her MAP test? She needs remedial reading lessons. A class earns a 90% success rate on its IB diploma scores? We should celebrate!
The truth may be far more nuanced. A closer look at the disaggregated data (if such data is available) may reveal different areas of strength and weakness, and might help direct intervention toward more productive areas. It may also reveal that the need for intervention has been exaggerated.
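The same idea can be sketched in a few lines. The subscore names and numbers below are entirely hypothetical (not real MAP categories) — the point is only that one overall number can hide where the actual weakness lies:

```python
# Hypothetical reading subscores for one student (names and values invented).
subscores = {
    "decoding": 85,
    "fluency": 80,
    "vocabulary": 45,
    "comprehension": 50,
}

# The single headline figure: a simple average of the subscores.
overall = sum(subscores.values()) / len(subscores)

# The disaggregated view: which specific area needs intervention?
weakest = min(subscores, key=subscores.get)

print(f"Overall score: {overall:.0f}")
print(f"Weakest area:  {weakest}")
```

Here the overall score of 65 might earn the student a generic "remedial reading" label, while the breakdown points at vocabulary specifically — a much more targeted place to intervene.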
Unfortunately, we often fail to get beyond the overall statistic. The student (or school or class) gets labelled as “29%” or “a 4” or whatever, and that label becomes the perceived reality.
As schools dive into “data-driven decisions,” it is well worth reflecting on what exactly the data shows us … if it shows us anything meaningful at all. It’s worth seeking more detailed, nuanced ways of looking at student performance. (Funnily enough, getting to that level of nuance might mean going back to the oldest “data-driven” performance evaluator: the classroom teacher, whose judgements are based on the daily collection of various data points.)