
Extensive Biography of Sir Ronald Aylmer Fisher


This project will begin by presenting an extensive biography of Sir Ronald Aylmer Fisher, a genius who almost single-handedly created the foundations for modern statistical science [11, p.738]. It is therefore vitally important to acknowledge the life that gave statistics its foundations.

We shall discuss many of the basic ideas of statistics required to understand statistical estimation. When scientists carry out experiments, they use the observed data obtained from random samples in order to gain knowledge about the population from which they sampled. This is the underlying idea of the frequentist approach to statistical estimation. The Bayesian approach differs in that it assumes prior knowledge about the population before the observations are made. This paper will give an in-depth insight into the frequentist approach to statistical estimation, which was initiated by Sir Ronald Aylmer Fisher.

There will be a formal introduction to the method of maximum likelihood estimation, which is by far the most popular technique for deriving estimators [4, p.289]. This method is considered better than the method of least squares and the method of moments because it produces estimators that possess all the optimal properties. Although it has advantages and disadvantages like any other point estimation method, it remains superior and more valuable.

This will be followed by the historical revolution of maximum likelihood estimation, which was consolidated by Sir Ronald Aylmer Fisher's contributions. He produced a simple recipe that purports to lead to the optimum solution for all parametric problems and beyond, and not only promises an optimum estimate but also a simple all-purpose assessment of its accuracy [13, p.598]. A brief account of the shortcomings of the methods used before Fisher's work will be given to explain why Fisher was motivated to carry out his studies. Along with this, Fisher's challenging mathematical work over the period 1912-1922 will be discussed briefly in order to account for the eventual acceptance of his method.

Furthermore, another brief account will be given to illustrate Fisher's influence on the method's subsequent progress. The properties of maximum likelihood estimators will be discussed in detail and contrasted with estimators obtained from other methods of estimation. Estimators that satisfy the criteria of consistency, efficiency and sufficiency are known as good estimators [9, p.6]; Fisher formulated these criteria after he had produced his well-defined method. It will be shown throughout this paper that maximum likelihood estimators possess these qualities, making them more advanced than other estimators.

Ronald Aylmer Fisher's Biography

Sir Ronald Aylmer Fisher was a British statistician born on 17 February 1890 in London, England. He started attending Harrow, an independent boys' school, in 1904. There he performed outstandingly well in a mathematical essay competition [15] and was acknowledged for his achievement by winning the Neeld Medal in 1906. He continued his education by enrolling to study mathematics and astronomy at Cambridge University in October 1909. During his time at university he developed an interest in biology and helped to form the Cambridge University Eugenics Society. Having successfully completed his course, he graduated with distinction in the Mathematical Tripos of 1912 [15]. He was again recognised for his achievements and as a result was awarded a Wollaston studentship. With this support he continued his education by studying the theory of errors, and it was this interest that eventually led him to investigate statistical problems [15]. During his final year as an undergraduate he wrote the paper On an absolute criterion for fitting frequency curves, which illustrated his interest in statistical problems and contained his initial formulation of the method of maximum likelihood.

After leaving Cambridge University in 1913, he travelled to Canada and worked on a farm for a short period. He then returned to England and worked temporarily for various institutes until 1919, shortly after the First World War had ended. He was then in a position to stabilise his career, having been offered two posts at the same time: Karl Pearson offered him the post of Chief Statistician at the Galton Laboratories, and he was also offered the post of statistician at the Rothamsted Agricultural Experiment Station [15].

He rejected the offer made by Pearson and accepted the position at Rothamsted, a well-known establishment where the effects of nutrition and soil types on plant fertility were studied. This position clearly appealed more to Fisher because of his experience of, and interest in, farming, and he made many contributions to statistics and genetics whilst working there. Up to that time he had only a few papers to his credit, including two which were most important, one in mathematical statistics and the other in genetics - the two fields in which he ultimately made his greatest contributions [2]. Charles Robert Darwin had formulated the idea of natural selection in the 19th century, before genetics was discovered. Naturally, Fisher was interested in Darwin's evolutionary theory, which concluded that individuals with advantageous variations are more likely to survive and reproduce than those without them. Fisher carried out many biological experiments at Rothamsted, and the genetic results he obtained were analysed using statistical methods.

In 1921 he introduced the concept of likelihood, enabling him to present a new definition of statistics. Furthermore, he developed the criteria for estimation by defining the required concepts of consistency, efficiency and sufficiency. Fisher [15] published a number of important texts, in particular Statistical Methods for Research Workers (1925). This book was extremely significant, as it is essentially a manual for the methods he formulated whilst working at Rothamsted.

Karl Pearson (Galton Professor of Eugenics) retired from University College, London in 1933, and his department was divided into two: Statistics and Eugenics. Fisher was appointed the new Galton Professor of Eugenics (Pearson's successor), and Egon S. Pearson (Karl Pearson's son) was appointed head of the Statistics Department. Fisher continued his study of genetics and statistics there for ten years. Although he again faced the distressing circumstances of the Second World War period, he continued with his hard work and contributions. Fisher's outstanding lifelong work was recognised continuously by the Royal Society: he was elected a Fellow of the Royal Society in 1929, awarded the Royal Medal of the Society in 1938, and awarded the Darwin Medal of the Society in 1948 [15].

At the end of this difficult period, in 1943, he became Balfour Professor of Genetics at Cambridge University, where he had initially started his own distinguished educational journey. After his retirement in 1957 he spent the last few years of his life at the University of Adelaide, Australia, as a research fellow.

Fundamentals of statistics

Statistics is an important branch of applied mathematics, and one may consider it to be the study of observational data using mathematical tools. The observational data are obtained from experiments carried out on a representative sample of adequate size from the population. This idea of "population" applies not only to living creatures but also to non-living objects. In reality the size of a population may be infinitely large, hence the population may be divided into distinguishable subgroups. This makes the population more manageable and allows one to take a reasonable sample. Just as a single observation may be regarded as an individual, and its repetition as generating a population, so the entire result of an extensive experiment may be regarded as but one of a population of such experiments [5, p.2].

If we can find a mathematical form for the population which adequately represents the data, and then calculate from the data the best possible estimates of the required parameters, then it would seem that there is little, or nothing, more that the data can tell us; we shall have extracted from it all the available relevant information.[5, p.7]

The way that the population is distributed can be represented by a mathematical equation that involves a certain number, usually a few, of parameters or "constants" entering into the mathematical formula [5, p.7]. The population is characterized by these parameters, so knowing their precise values would allow us to know everything that we could possibly find out about the population. Although it is not possible to determine the precise values of these parameters, they can be approximated using statistical estimation methods. It is also important to note that no matter how good our approximations are, the estimates (also called statistics) of the parameters will still be imprecise.

The statistics will change from sample to sample within the population. For example, if we calculate the sample mean (a statistic) for many random samples taken from the population, we will observe that the mean changes each time. Clearly one can represent the distribution of these statistics mathematically. It is important to note that if the distribution of the sample from which a statistic is derived is known, then it is possible to find the distribution of the statistic.
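
As a minimal illustration of this sampling variability (a sketch only; the exponential population with mean 2, the sample size of 50, and the number of repetitions are assumptions chosen for the example, not taken from the text), the following Python snippet draws many random samples and records the sample mean of each, showing that the statistic has a distribution of its own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: exponential with mean 2.0 (an assumption
# made purely for illustration).
sample_means = [rng.exponential(2.0, size=50).mean() for _ in range(10_000)]

print(f"average of the sample means: {np.mean(sample_means):.3f}")   # near 2.0
print(f"spread (std dev) of the sample means: {np.std(sample_means):.3f}")
```

The average of the sample means sits close to the population mean, while their spread would shrink if the sample size grew, anticipating the discussion of consistency later in this paper.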

The method of maximum likelihood

This is the best known, most widely used, and most important of the methods of estimation [7, p.41]. The name of this method gives its simple idea away: we find the value of the parameter θ which maximizes the likelihood L(θ) [7, p.41]. This indicates that the likelihood function plays a fundamental role in this method. The function allows us to estimate the unknown parameters based on the known observed data obtained from random samples. Clearly this is unlike the idea of probability, which allows us to forecast unknown outcomes (observations) based on known parameters. The meanings of these two key words in statistics differ, but they nevertheless hold a mathematical relationship: the likelihood of a parameter is proportional to the probability of the data [15]. Fisher (1921) was the first to establish this fundamental relationship, and his remarkable work followed from it.
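
To make the recipe concrete, here is a hedged Python sketch (the Normal model, the known standard deviation of 2, and every number are assumptions for illustration only; the closed-form answer, the sample mean, is printed alongside as a check):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
# Hypothetical observed sample: Normal with unknown mean mu and a known
# standard deviation of 2 (both values assumed for this example).
data = rng.normal(loc=5.0, scale=2.0, size=200)

def neg_log_likelihood(mu):
    # Negative log-likelihood of a Normal(mu, 2^2) model for the data;
    # minimizing this is equivalent to maximizing the likelihood L(mu).
    return np.sum(0.5 * np.log(2 * np.pi * 4.0) + (data - mu) ** 2 / (2 * 4.0))

result = minimize_scalar(neg_log_likelihood)
print(f"maximum likelihood estimate of mu: {result.x:.3f}")
print(f"sample mean (the known closed-form MLE here): {data.mean():.3f}")
```

For this model the numerical maximizer and the sample mean coincide, which is exactly what the theory predicts.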

Revolution

The method of maximum likelihood occurred in various rudimentary forms before Fisher, but not under this name [10, p.214]. Although the method was undeveloped, estimation was an important problem in need of attention. This raises the question: why was it necessary to develop a new method of estimation? The simple answer is that the existing methods of estimation were not good enough, which could be interpreted in different ways depending on the nature of the estimation problem. In the process of developing ideas related to the existing methods, Carl Friedrich Gauss (1816), Gotthilf Heinrich Ludwig Hagen (1837) and Francis Ysidro Edgeworth (1909) made contributions that implied the method of maximum likelihood. However, they all failed in some way or another because their procedures lacked satisfactory proof. Although their ideas appear in some areas of Fisher's work, it is stated that he did not know these results when he wrote his first papers on maximum likelihood [10, p.215].

Gauss (1809) had derived the normal distribution, which initiated the development of the method of maximum likelihood. Pierre Simon Laplace (1812) used the method of moments to show that (1/n) Σ(xᵢ − x̄)² is a biased estimate of the variance σ². Also, Gauss (1823) used his frequentist approach to the method of least squares to prove that (1/(n−1)) Σ(xᵢ − x̄)² is an unbiased estimate of σ² [10, p.215]. One of the biggest problems with their estimation procedures was that the estimator depends on the parameterization of the model, which means that the resulting estimate is arbitrary [9, p.7].
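
The bias Laplace identified can be checked by simulation. The Python sketch below (hypothetical normal data with σ² = 4 and sample size 10; all numbers are illustrative assumptions) shows that the divisor-n estimate systematically undershoots the true variance, averaging about σ²(n−1)/n = 3.6, while the divisor-(n−1) form does not:

```python
import numpy as np

rng = np.random.default_rng(2)

biased, unbiased = [], []
for _ in range(20_000):
    x = rng.normal(0.0, 2.0, size=10)   # hypothetical sample, sigma^2 = 4
    biased.append(np.var(x))            # divisor n:   (1/n)     * sum (x_i - xbar)^2
    unbiased.append(np.var(x, ddof=1))  # divisor n-1: (1/(n-1)) * sum (x_i - xbar)^2

print(f"average 1/n estimate:     {np.mean(biased):.3f}")    # about 3.6, below 4
print(f"average 1/(n-1) estimate: {np.mean(unbiased):.3f}")  # about 4.0
```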

Fisher wanted to create a new method that gave estimates invariant under parameter transformation. This consideration would have eliminated such criteria as that the estimate should be "unbiased" [6, p.146]. He introduced his new method of estimation in the paper On an absolute criterion for fitting frequency curves [17]. This paper begins with his criticism of the existing methods: the method of least squares, because he thought it was not applicable to frequency curves and arbitrariness arises in the scaling of the abscissa [Fisher]; and the method of moments, because it is arbitrary and does not define a rule for choosing which moments enter the estimating equations. Although he introduced the method in this paper, it is not apparent to the reader that he is stating a new method, because it reads as if he is using a modified version of the existing inverse-probability methods. Hence the paper was not given much importance at the time, which set him on a mission to find out what had caused this disapproval and misunderstanding.
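
The invariance Fisher wanted is easy to demonstrate numerically. In the hedged Python sketch below (hypothetical Bernoulli data; the odds transformation is chosen purely for illustration), applying a transformation to the maximum likelihood estimate gives the same answer as maximizing the likelihood directly in the transformed parameterization:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.binomial(1, 0.3, size=500)   # hypothetical Bernoulli(p) sample

p_hat = x.mean()                     # maximum likelihood estimate of p
odds_hat = p_hat / (1 - p_hat)       # by invariance, the MLE of the odds

# Check: maximizing the likelihood directly over the odds parameterization
# lands on the same value.
odds_grid = np.linspace(0.01, 2.0, 20_000)
p_grid = odds_grid / (1 + odds_grid)
log_lik = x.sum() * np.log(p_grid) + (len(x) - x.sum()) * np.log(1 - p_grid)

print(f"p_hat = {p_hat:.3f}, odds from invariance = {odds_hat:.3f}")
print(f"odds from direct maximization = {odds_grid[np.argmax(log_lik)]:.3f}")
```

An unbiasedness criterion, by contrast, is not preserved under such transformations, which is why Fisher set it aside.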

Fisher was the first to introduce the concept of likelihood, in 1921. After years of study he discovered the distinction and the relationship between likelihood and inverse probability. This was a breakthrough for his method. He introduced a formal definition of likelihood and explained the central role of the likelihood function in his method. This concept was also one of his great achievements in the development of statistics. It allowed him to rectify the mistake in the disapproved paper On an absolute criterion for fitting frequency curves [17], in which he had initially presented the method as if derived from the idea of inverse probability: "I must indeed plead guilty in my original statement of the Method of Maximum Likelihood (1912) to having based my argument upon the principle of inverse probability" [16]. Having corrected the misinterpretation of that paper, he successfully produced his well-defined method of estimation in 1922.

Fisher's work was continuously criticized by Karl Pearson, which created a permanent rift between them. Pearson began his criticism in 1917, when he published a paper claiming Fisher had failed to distinguish likelihood from inverse probability. This annoyed Fisher, who from his own point of view felt that he was right; Pearson, however, was set on proving him wrong. The personal conflict between them remained even after Pearson died in 1936. Fisher was also upset by the misjudgments mathematicians and biologists made about his work, simply because there were parts of it they were not competent to evaluate. Hence Fisher became bitter, feeling he had suffered serious injustice. [15]

By that point the method of maximum likelihood was widely discussed in statistics, and interest in inverse-probability methods declined. Fisher said that the theory of inverse probability "is founded upon an error, and must be wholly rejected" [5, p.9]. By this he meant that it is not appropriate to reverse the idea of probability to make estimations; however, we can still draw inferences about populations from knowledge of a sample.

Properties of statistics

The distribution of a statistic allows us to choose the most suitable statistic to use for estimation. Hence, from the behaviour of the distribution we can separate statistics into groups. If we calculate a statistic, for example the mean, from a very large sample, it will be a much more accurate estimate of the population mean than an estimate obtained from a smaller sample.

As a result, as the sample size gets larger and larger, the difference between the statistic and the population value it estimates gets smaller and smaller. In fact, as the samples are made larger without limit, the statistic will usually tend to some fixed value characteristic of the population, and, therefore, expressible in terms of the parameters of the population. [5, p.9]

There is only one correct parametric function to which the statistic can be equated. If a statistic is instead equated to another parametric function, then regardless of how large the sample is made it will still tend to the incorrect fixed value; such statistics are called inconsistent statistics. Conversely, consistent statistics are equated to the correct parametric function, and as the sample size increases they tend to the "correct" value. The errors between the actual value and the estimate tend to a normal distribution. The mean value of the squared errors is the variance, and the variance is inversely proportional to the sample size: if we increase the sample size, the variance decreases in the same proportion.
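
A minimal simulation of these two claims (a sketch assuming a hypothetical Normal(0, 1) population, for which the variance of the sample mean is exactly 1/n; the sample sizes are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Variance of the sample mean at several sample sizes; the theoretical
# value is 1/n, so increasing n shrinks the variance in proportion.
for n in [10, 100, 1000]:
    means = [rng.normal(0.0, 1.0, size=n).mean() for _ in range(5_000)]
    print(f"n = {n:>4}: observed variance {np.var(means):.5f}, theory {1 / n:.5f}")
```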

Suppose we calculate a consistent statistic, such as the mean, from many random samples of the same size from a population, and do the same for other consistent statistics. Clearly the variances of different consistent statistics will differ, and the statistic with the smaller variance can be classed as the more "efficient" statistic.

R. A. Fisher [5, p.12] illustrated this idea with the following example:

If from a large sample of (say) 1000 observations we calculate an efficient statistic, A, and a second consistent statistic, B, having twice the variance of A, then B will be a valid estimate of the required parameter, but one definitely inferior to A in its accuracy. Using the statistic B, a sample of 2000 values would be required to obtain as good an estimate as is obtained by using the statistic A from a sample of 1000 values. We may say, in this sense, that the statistic B makes use of 50 per cent of the relevant information available in the observations; or, briefly, that its efficiency is 50 per cent. The term "efficient" in its absolute sense is reserved for statistics the efficiency of which is 100 per cent.
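
In the same spirit as Fisher's example (a sketch with hypothetical normal data, not his original calculation), the following Python snippet compares the sample mean with the sample median as estimators of a normal population mean. For normal data the median's efficiency comes out near 64 per cent, so roughly 1000/0.64 ≈ 1570 observations would be needed for the median to match the mean computed from 1000:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two consistent estimators of a hypothetical Normal(0, 1) population mean:
# the sample mean (fully efficient) and the sample median (less efficient).
means, medians = [], []
for _ in range(20_000):
    x = rng.normal(0.0, 1.0, size=1000)
    means.append(x.mean())
    medians.append(np.median(x))

efficiency = np.var(means) / np.var(medians)
print(f"efficiency of the median relative to the mean: {efficiency:.0%}")  # ~64%
```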

Although not all consistent estimates are efficient, they are still valuable and can be adequately accurate in estimation problems. For large samples it can be shown that all efficient statistics tend to the same value; therefore it does not matter which efficient statistic one uses.

Furthermore, there are "sufficient statistics", which contain all the relevant information in the observations, even for small samples. These statistics are definitely superior to other efficient statistics [5, p.14]; hence, when they exist, they are much more valuable.
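
A small sketch of what sufficiency means in practice (hypothetical Bernoulli data, chosen only for illustration): two samples with the same sum, in different orders, produce identical likelihood functions for p, so the sum already contains everything the sample has to say about p.

```python
import numpy as np

# Two hypothetical Bernoulli samples with the same sum but different orderings.
x1 = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # sum = 4
x2 = np.array([0, 1, 0, 0, 1, 1, 0, 1])   # sum = 4

# Likelihood of p computed observation by observation for each sample.
p = np.linspace(0.01, 0.99, 99)
lik1 = np.prod([p ** xi * (1 - p) ** (1 - xi) for xi in x1], axis=0)
lik2 = np.prod([p ** xi * (1 - p) ** (1 - xi) for xi in x2], axis=0)
print(np.allclose(lik1, lik2))  # True: the likelihood functions coincide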

This chapter concludes with the most important fact, proved through R. A. Fisher's work: efficient statistics can always be found by the method of maximum likelihood, and this method also gives sufficient statistics whenever they exist.

Conclusion

It is apparent that Sir Ronald Aylmer Fisher was a very capable individual who endured a very challenging journey in his lifetime. Confronted by many opponents, he adamantly continued with his great work, and he gained recognition from many institutions throughout his life. Although his main contributions were in genetics, his contributions to statistics were equally regarded.

The historical revolution of the method represents a small but important part of Fisher's work, one which created the foundation of the statistics we know today. His struggle and determination to understand the analysis of experiments is held in high regard in statistical testing. The intense research he carried out to elaborate his ideas was fundamental to the development of statistical theory. Evidence of this is present in industry today, where his method of maximum likelihood is built into statistical software, allowing one to analyse data and obtain the most accurate estimates possible. In this sense the method is superior to other methods of estimation.

That no one in the 20th century had a greater impact on scientific research than Fisher is evident from his exceptional work. The ideas and concepts he introduced have allowed statisticians to understand estimation better, and he contributed greatly to the philosophy of our age. I therefore feel that statistical science is a key to resolving the uncertainty we live with today.
