
I'm studying operations management and I have an exam. Can someone explain standard deviation in simple terms?

2007-02-02 02:34:10 · 9 answers · asked by Anonymous in Science & Mathematics Mathematics

9 answers

In probability and statistics, the standard deviation of a probability distribution, random variable, or population or multiset of values is a measure of the spread of its values. It is defined as the square root of the variance.

The standard deviation is the root mean square (RMS) deviation of the values from their arithmetic mean. For example, in the population {4, 8}, the mean is 6 and the standard deviation is 2. This may be written: {4, 8} ≈ 6±2. In this case 100% of the values in the population are within one standard deviation of the mean.

Standard deviation is the most common measure of statistical dispersion, measuring how widely spread the values in a data set are. If the data points are all close to the mean, then the standard deviation is close to zero. If many data points are far from the mean, then the standard deviation is far from zero. If all the data values are equal, then the standard deviation is zero.

A large standard deviation indicates that the data points are far from the mean and a small standard deviation indicates that they are clustered closely around the mean.

For example, each of the three data sets (0, 0, 14, 14), (0, 6, 8, 14) and (6, 6, 8, 8) has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third set has a much smaller standard deviation than the other two because its values are all close to 7. In a loose sense, the standard deviation tells us how far from the mean the data points tend to be. It will have the same units as the data points themselves. If, for instance, the data set (0, 6, 8, 14) represents the ages of four siblings, the standard deviation is 5 years.

As another example, the data set (1000, 1006, 1008, 1014) may represent the distances traveled by four athletes in 3 minutes, measured in meters. It has a mean of 1007 meters, and a standard deviation of 5 meters.

In the age example above, a standard deviation of 5 may be considered large; in the distance example above, 5 may be considered small.
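These figures are easy to check with a few lines of code. Here is a minimal Python sketch (using the standard library's statistics module; the data sets are the ones quoted above) that reproduces the means and population standard deviations:

```python
from statistics import mean, pstdev  # pstdev = population standard deviation

data_sets = {
    "(0, 0, 14, 14)": [0, 0, 14, 14],
    "(0, 6, 8, 14) -- sibling ages": [0, 6, 8, 14],
    "(6, 6, 8, 8)": [6, 6, 8, 8],
    "athlete distances (m)": [1000, 1006, 1008, 1014],
}

for label, data in data_sets.items():
    print(label, "mean =", mean(data), "SD =", pstdev(data))

# Prints mean 7 and SDs 7.0, 5.0, 1.0 for the first three sets,
# and mean 1007, SD 5.0 for the athlete distances.
```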

Standard deviation may serve as a measure of uncertainty. In physical science for example, the reported standard deviation of a group of repeated measurements should give the precision of those measurements. When deciding whether measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then we consider the measurements as contradicting the prediction. This makes sense since they fall outside the range of values that could reasonably be expected to occur if the prediction were correct and the standard deviation appropriately quantified.

2007-02-02 07:33:10 · answer #1 · answered by Billy 2 · 0 0

I'm assuming you have an understanding of an average (or 'mean'). If you have a collection of numbers, they have an average. Some of the numbers in the collection will be closer to the average, some will be further away. The standard deviation is a measure of how scattered the numbers are compared to the average.

For example, take the ages of children in a school year. Say the average is 12. Then all the children in the year are probably 11, 12 or 13 years old. So all the numbers are quite close to the average (nobody is more than one year away from it), and you have a low standard deviation. On the other hand, take the ages of everyone in the town. You'll have a whole range of them, from the very young to the very old, with an average somewhere in the middle. Some people will be close in age to the average age, some won't be, and the very young and very old people will be quite a way off. So the standard deviation will be larger, because there's more of a spread of ages.

2016-03-15 04:15:50 · answer #2 · answered by Anonymous · 0 0

Simple Explanation Of Standard Deviation

2016-10-18 07:03:55 · answer #3 · answered by ? 4 · 0 0

Standard deviation is, roughly, the typical distance your data points are away from the mean. For example, if your data were the heights, in inches, of 18-year-old army recruits and the mean measurement was 70 inches, the standard deviation would tell you how likely you would be to encounter a soldier who was more than two inches shorter or taller than the mean; without the standard deviation you could not address this kind of question. To calculate the sample standard deviation you take the square root of the sum of (x - m)^2 divided by (n - 1), where x represents your data points, n is the number of data points, and m is your mean. Hope this helps.
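As a quick illustration of that formula, here is a minimal Python sketch (the heights below are made up for the example) compared against the standard library's statistics.stdev, which uses the same n - 1 divisor:

```python
import math
from statistics import stdev  # sample standard deviation (n - 1 divisor)

heights = [68, 70, 72, 71, 69, 70]   # hypothetical recruit heights in inches
n = len(heights)
m = sum(heights) / n                 # mean = 70

# sqrt( sum((x - m)^2) / (n - 1) ), as described above
sd = math.sqrt(sum((x - m) ** 2 for x in heights) / (n - 1))

print(sd, stdev(heights))            # both print the same value (about 1.414)
```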

2007-02-02 02:42:53 · answer #4 · answered by bruinfan 7 · 0 0

First note that the standard deviation is NOT the average or mean amount each data point varies from the mean. The mean distance of each data point from the mean is zero.

e.g., data points: 3, 4, 6, 7; mean = 5; distances from the mean are -2, -1, +1, +2; mean of these distances = 0.

The standard deviation is the square root of the mean of the squared distances from the mean. (A short code sketch following these steps appears at the end of this answer.)

1. To calculate it, first square the distances from the mean.

e.g., for the data points above: 4, 1, 1, 4 are the squared distances from the mean.

2. Then total these squared distances.

e.g., for the data points above: 4 + 1 + 1 + 4 = 10. (This total is often known as the Sum of Squares, or SS, for the data.)

3. Then calculate the mean of the squared deviations (also known as the variance) by dividing SS by N, where N is the number of data points you are calculating the SD of.

e.g., for the data points above = 10/4 = 2.5.

4. Take the square root of the variance to get the SD.

e.g., square root of 2.5 = 1.5811 (to 4 d.p.)

Complications:

(1) This description is for a formula that treats the data as a population. If the data are a sample from an infinite population and one wants an inferential statistic that gives you an estimate of the population SD then one must divide by (N-1) rather than N in step 3.

e.g., variance = 10/3 = 3.3333, so SD = √3.3333 = 1.8257 (to 4 d.p.)

This number is always bigger than the population figure from step 3. Dividing by (N-1) allows for the fact that a small sample tends to understate the variability of the population it is drawn from; as the data set gets larger, it more closely resembles the population it is sampled from, and the two formulae give almost the same answer.

(2) There are alternative formulae for the SD that produce identical results but simplify the hand calculations - so you may be taught a different, but equivalent formula for hand calculation.

Why calculate the SD? What does it mean? In many ways the variance is a more versatile statistic than the SD, but the SD is preferred when reporting the mean because it is on the same scale as the raw data (so if the mean is 5 seconds the SD can be thought of as 1.58 seconds). In one sense we can think of the use of "squares" as a trick to get rid of the signs (plus or minus) of the distances/deviations.

I usually think of the SD as a measure of how far a typical data point is from the mean ... in our example a typical data point is 1.58 units from the mean. This works particularly well if you know the approximate distribution of the population your data are from. For example, for a normal distribution approximately 95% of data points are within 2 SD of the mean, and about 2/3 are within 1 SD of the mean.
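Here is a minimal Python sketch of the four steps above on the example data, together with the (N-1) variant from Complication (1); the printed values match the worked figures of 1.5811 and 1.8257:

```python
import math

data = [3, 4, 6, 7]
mean = sum(data) / len(data)                         # mean = 5

squared_distances = [(x - mean) ** 2 for x in data]  # step 1: 4, 1, 1, 4
ss = sum(squared_distances)                          # step 2: SS = 10

variance = ss / len(data)                            # step 3: 10 / 4 = 2.5
sd = math.sqrt(variance)                             # step 4: population SD ≈ 1.5811

# Complication (1): divide by (N - 1) to estimate the population SD from a sample
sample_sd = math.sqrt(ss / (len(data) - 1))          # ≈ 1.8257

print(sd, sample_sd)
```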

2007-02-02 07:17:54 · answer #5 · answered by Thom 2 · 0 0

Conceptually, SD is the amount of 'scatter' or variation in your data. For example, if every one in your sample were all the same height and you calculated its SD, it would be zero since there is no variation. Each person's height would be the same as the average/mean for the group. So a small SD indicates that the heights are clustered closely around the mean, while a large SD indicates that they are far from the mean. You could say that the SD of a population or sample is a measure of how much the data points in question are tending towards the mean or average.

2007-02-02 02:55:25 · answer #6 · answered by Anonymous · 1 0

The standard deviation, simply put, is roughly the typical amount by which each trial varies from the mean of the group.

2007-02-02 02:40:10 · answer #7 · answered by Anonymous · 0 0

It is a measure of how much a set of data varies from the average (mean).

2007-02-02 04:34:35 · answer #8 · answered by Colin S 2 · 0 0

Does http://www.bbc.co.uk/scotland/education/bitesize/standard/mathsII/statistics/standard-deviation_rev2.shtml help?

2007-02-02 08:41:30 · answer #9 · answered by Anonymous · 0 0
