2006-12-18 03:55:56 · 2 answers · asked by wilburrr 2 in Science & Mathematics Engineering

2 answers

Standard deviation is not something that can be generalized as "typical" for any application. Rather, the standard deviation of a given exam characterizes the spread of scores on that particular exam. I recently took a course where the statistics for the midterm and final were made available. With respective average scores of 75.71 and 76.25 out of 100, the standard deviations were 12.19 and 16.04. Homework was also graded out of 100, and on the eleven assignments, standard deviations ran from as low as 4.41 to as high as 13.02. These are specific examples, and are not typical of anything.
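
If you want to see how numbers like those are computed, here is a minimal Python sketch. The score list is made up for illustration, not the actual course data mentioned above.

import statistics

# Hypothetical exam scores out of 100 (placeholder data, not from the course above)
scores = [62, 71, 75, 78, 80, 83, 85, 88, 91, 95]

mean = statistics.mean(scores)
# pstdev treats the class as the whole population; use statistics.stdev for a sample estimate
sd = statistics.pstdev(scores)

print(f"mean = {mean:.2f}, standard deviation = {sd:.2f}")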

2006-12-18 03:58:26 · answer #1 · answered by DavidK93 7 · 0 0

Without the numbers, no one can give the answer to your specific situation. But the theory goes like this: If the distribution of exam scores forms the classic bell curve, then one standard deviation (SD) in each direction from the mean will encompass about 68 percent of the scores. Those people get the C's. Scores more than one SD but less than two SD from the mean get the B's or D's, depending on direction from the mean. And so on...
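
As a rough illustration of that curve-grading scheme, here is a small Python sketch. The mean and SD are placeholder values, not from any real exam; the cutoffs follow the one-SD / two-SD bands described above.

# Illustrative curve grading: letter grade from distance to the mean in SDs
MEAN = 76.0  # placeholder class mean
SD = 12.0    # placeholder standard deviation

def letter_grade(score, mean=MEAN, sd=SD):
    z = (score - mean) / sd  # how many SDs the score is from the mean
    if z >= 2:
        return "A"
    elif z >= 1:
        return "B"
    elif z > -1:
        return "C"   # within one SD of the mean, roughly 68% of a bell curve
    elif z > -2:
        return "D"
    else:
        return "F"

for s in (95, 90, 80, 70, 60, 50):
    print(s, letter_grade(s))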

2006-12-18 12:51:45 · answer #2 · answered by Stan the Rocker 5 · 0 0
