
I need it by Tuesday. Thanks for looking and answering.

2006-10-27 16:43:57 · 3 answers · asked by Mary Eda 2 in Science & Mathematics Mathematics

3 answers

My stats teacher used to tell our class that if not for gambling, there would be no probability, and therefore no statistics.

I've found this site:

http://www.york.ac.uk/depts/maths/histstat/stiglercontents.htm

Open it yourself; it's quite lengthy.

As an addition, here is a timeline from http://www.ship.edu/~cgboeree/histofstats.html:



1654 -- Pascal -- mathematics of probability, in correspondence with Fermat


1662 -- William Petty and John Graunt -- first demographic studies

1713 -- Jakob Bernoulli -- Ars Conjectandi

1733 -- de Moivre -- Approximatio; law of error (similar to standard deviation)

1763 -- Rev. Bayes -- An essay towards solving a problem in the Doctrine of Chances, foundation for "Bayesian statistics"

1805 -- A-M Legendre -- least squares method

1809 -- C. F. Gauss -- Theoria Motus Corporum Coelestium

1812 -- P. S. Laplace -- Théorie analytique des probabilités

1834 -- Statistical Society of London established

1853 -- Adolphe Quetelet -- organized first international statistics conference; applied statistics to biology; described the bell-shaped curve


1877 -- F. Galton -- regression to the mean

1888 -- F. Galton -- correlation

1889 -- F. Galton -- Natural Inheritance

1900 -- Karl Pearson -- chi square; applied correlation to natural selection

1904 -- Spearman -- rank (non-parametric) correlation coefficient

1908 -- "Student" (W. S. Gosset) -- The probable error of a mean; the t-test

1919 -- R. A. Fisher -- ANOVA; evolutionary biology

1930s -- Jerzy Neyman and Egon Pearson (son of Karl Pearson) -- type II errors, power of a test, confidence intervals

And another timeline, from http://www.anselm.edu/homepage/jpitocch/biostats/biostatstime.html

Open them yourself... good luck!

2006-10-27 17:11:48 · answer #1 · answered by lazareh 2 · 1 0

Statistics is a mathematical science pertaining to the collection, analysis, interpretation, and presentation of data. It is applicable to a wide variety of academic disciplines, from the physical and social sciences to the humanities; it is also used for making informed decisions in all areas of business and government.

Statistical methods can be used to summarize or describe a collection of data; this is called descriptive statistics. In addition, patterns in the data may be modeled in a way that accounts for randomness and uncertainty in the observations, to draw inferences about the process or population being studied; this is called inferential statistics. Both descriptive and inferential statistics can be considered part of applied statistics. There is also a discipline of mathematical statistics, which is concerned with the theoretical basis of the subject.
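The descriptive/inferential distinction above can be made concrete with a tiny sketch. This uses only the Python standard library and entirely made-up numbers; the normal-approximation confidence interval is just one simple inferential procedure among many.

```python
import math
import statistics

# A small sample of observations (hypothetical data for illustration).
data = [4.1, 5.0, 4.8, 5.6, 4.4, 5.2, 4.9, 5.1]

# Descriptive statistics: summarize the sample itself.
mean = statistics.mean(data)
sd = statistics.stdev(data)  # sample standard deviation

# Inferential statistics: say something about the wider population
# the sample came from. A rough 95% confidence interval for the
# population mean, using the normal approximation (z = 1.96).
se = sd / math.sqrt(len(data))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean = {mean:.3f}, sd = {sd:.3f}")
print(f"approx. 95% CI for the population mean: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The first two numbers only describe the data in hand; the interval is a claim about something not directly observed, which is what makes it inferential.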

The word statistics is also the plural of statistic (singular), which refers to the result of applying a statistical algorithm to a set of data, as in employment statistics, accident statistics, etc.

2006-10-27 16:50:40 · answer #2 · answered by M. Abuhelwa 5 · 1 0

Etymology
The word statistics ultimately derives from the modern Latin term statisticum collegium ("council of state") and the Italian word statista ("statesman" or "politician"). The German Statistik, first introduced by Gottfried Achenwall (1749), originally designated the analysis of data about the state, signifying the "science of state" (then called political arithmetic in English). It acquired the meaning of the collection and classification of data generally in the early 19th century. It was introduced into English by Sir John Sinclair.

Thus, the original principal purpose of Statistik was to supply data for use by governmental and (often centralized) administrative bodies. The collection of data about states and localities continues, largely through national and international statistical services. In particular, censuses provide regular information about the population.

During the 20th century, the creation of precise instruments for public health concerns (epidemiology, biostatistics, etc.) and economic and social purposes (the unemployment rate, econometrics, etc.) necessitated substantial advances in statistical practice. This became a necessity for the Western welfare states that developed after World War I, which had to develop specific knowledge of their "populations". Philosophers such as Michel Foucault have argued that this constituted a form of "biopower", a term which has since been used by many other authors.


Origins in probability
The mathematical methods of statistics emerged from probability theory, which can be dated to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. [1]

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given.

Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve. He deduced a formula for the mean of three observations. He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations. Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
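Daniel Bernoulli's "maximum product of the probabilities" principle is an early ancestor of maximum likelihood. As a rough numerical sketch (hypothetical observations, unit-variance normal errors assumed): scanning candidate values of the true quantity, the product of the error densities peaks at the arithmetic mean of the observations.

```python
import math

# Three "concurrent" observations of the same quantity (hypothetical values).
obs = [9.8, 10.1, 10.4]

def product_of_densities(m, sigma=1.0):
    """Product of normal error densities, with errors measured from candidate value m."""
    p = 1.0
    for x in obs:
        p *= math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    return p

# Scan candidate values from 9.00 to 11.00; the product peaks at the mean.
candidates = [9.0 + 0.01 * i for i in range(201)]
best = max(candidates, key=product_of_densities)
print(f"best candidate: {best:.2f}")  # the mean of obs is 10.1
```

Maximizing a product of normal densities is equivalent to minimizing the sum of squared errors, which is how this principle connects to the least squares method discussed next in the history.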

The method of least squares, which was used to minimize errors in data measurement, is due to Adrien-Marie Legendre (1805), who introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error. He gave two proofs, the second being essentially the same as John Herschel's (1850). Carl Gauss gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), and Morgan Crofton (1870).
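For a concrete sense of what Legendre's method does, here is the closed-form least squares fit of a straight line, using only the Python standard library and made-up data points:

```python
# Simple least squares fit of a line y = a + b*x (hypothetical data).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept that minimize the sum of squared residuals.
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

print(f"fitted line: y = {a:.3f} + {b:.3f} x")
```

The "squared residuals" being minimized are exactly the observational errors the eighteenth-century theory of errors was concerned with.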

Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.

In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.

Adolphe Quetelet (1796-1874), another important founder of statistics, introduced the notion of the "average man" (l'homme moyen) as a means of understanding complex social phenomena such as crime rates, marriage rates, or suicide rates.

------------------------------------------------------------------------------------

Hope this helps. Visit the website below for more info.

2006-10-27 16:50:27 · answer #3 · answered by BigEyedFish 6 · 1 0
