
Example: Finding Standard Deviation




Assumed mean


In statistics, the assumed mean is a method for calculating the arithmetic mean and standard deviation of a data set. It simplifies accurate calculation by hand. Its interest today is chiefly historical, but it can still be used to estimate these statistics quickly. Other rapid calculation methods, better suited to computers, also give more accurate results than the obvious formulas.



The method depends on estimating the mean and rounding it to a value that is easy to calculate with. This value is then subtracted from all the sample values. When the samples are classed into equal-size ranges, a central class is chosen and the count of ranges from it is used in the calculations. For example, for people's heights a value of 1.75 m might be used as the assumed mean.

For a data set with assumed mean x0, define the deviations di = xi − x0. Then the mean is

    mean = x0 + (1/N) Σ di

and the population standard deviation is

    σ = sqrt( (1/N) Σ di² − ((1/N) Σ di)² )

or, for a sample standard deviation using Bessel's correction:

    s = sqrt( ( Σ di² − (Σ di)²/N ) / (N − 1) )

(Our solved example in mathguru.com uses this concept)
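The assumed-mean calculation above can be sketched in a few lines of Python. The function name and the sample heights are illustrative choices, not from the original; the assumed mean of 1.75 m follows the heights example in the text.

```python
import math

def assumed_mean_stats(data, x0):
    """Estimate mean and standard deviation via the assumed-mean method.

    The deviations d_i = x_i - x0 are small numbers that are easy to
    sum by hand, which is the point of the method.
    """
    n = len(data)
    d = [x - x0 for x in data]                # deviations from the assumed mean
    sum_d = sum(d)
    sum_d2 = sum(di * di for di in d)
    mean = x0 + sum_d / n                     # mean = x0 + (1/N) Σ d_i
    pop_sd = math.sqrt(sum_d2 / n - (sum_d / n) ** 2)           # population σ
    sample_sd = math.sqrt((sum_d2 - sum_d ** 2 / n) / (n - 1))  # Bessel's correction
    return mean, pop_sd, sample_sd

heights = [1.72, 1.78, 1.75, 1.80, 1.70]      # hypothetical sample, in metres
mean, pop_sd, sample_sd = assumed_mean_stats(heights, 1.75)
```

Because the deviations from 1.75 m are tiny, the intermediate sums stay small, which is exactly what made this method convenient for hand calculation.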






Variance


In probability theory and statistics, the variance is used as a measure of how far a set of numbers is spread out. It is one of several descriptors of a probability distribution, describing how far the numbers lie from the mean (expected value). (Our solved example in mathguru.com uses this concept)

In particular, the variance is one of the moments of a distribution. In that context, it forms part of a systematic approach to distinguishing between probability distributions. While other such approaches have been developed, those based on moments are advantageous in terms of mathematical and computational simplicity.

The variance is a parameter describing in part either the actual probability distribution of an observed population of numbers, or the theoretical probability distribution of a not-fully-observed population of numbers. In the latter case a sample of data from such a distribution can be used to construct an estimate of its variance: in the simplest cases this estimate can be the sample variance.
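The distinction between the population variance and the sample-variance estimate can be shown directly. The data values below are illustrative, not from the original:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]   # a small illustrative sample

n = len(data)
mean = sum(data) / n

# Population variance: average squared deviation from the mean.
pop_var = sum((x - mean) ** 2 for x in data) / n

# Sample variance: divide by n - 1 (Bessel's correction) to get an
# unbiased estimate of the variance of the underlying population.
sample_var = sum((x - mean) ** 2 for x in data) / (n - 1)
```

Dividing by n − 1 rather than n compensates for the fact that deviations are measured from the sample mean, which itself was fitted to the data.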




Standard deviation


Standard deviation is a widely used measure of variability or diversity in statistics and probability theory. It shows how much variation or "dispersion" there is from the average (mean, or expected value). A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data are spread out over a large range of values.

Technically, the standard deviation of a statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler though practically less robust than the average absolute deviation. A useful property of standard deviation is that, unlike variance, it is expressed in the same units as the data. (Our solved example in mathguru.com uses this concept)
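The relationship described above (standard deviation as the square root of the variance, expressed in the data's own units) can be verified with Python's standard library; the data values and the comparison with the average absolute deviation are illustrative additions:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

pop_var = statistics.pvariance(data)   # population variance, in squared units
pop_sd = math.sqrt(pop_var)            # standard deviation, same units as data

# For comparison: the average absolute deviation mentioned above.
mean = statistics.fmean(data)
mad = sum(abs(x - mean) for x in data) / len(data)
```

Here `statistics.pstdev(data)` would return the same value as `pop_sd`; the average absolute deviation is never larger than the standard deviation, illustrating why the two measures can disagree about how "spread out" data are.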




The above explanation is copied from Wikipedia, the free encyclopedia, and is remixed as allowed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.