Descriptive and Inferential Statistics
Two of the most useful types of statistics are known as descriptive and inferential statistics. Descriptive statistics refers to the analysis done to summarize data from a population in a meaningful way, typically through graphs and charts. Inferential statistics, on the other hand, is a way of making generalizations about a population of interest from a smaller sample (Descriptive and Inferential Statistics, n.d.).
Probability Theory
Probability theory is one of the most widely utilized theories of inferential statistics. It is the branch of mathematics that focuses on analyzing the outcomes of random events and is often applied in the form of relative frequencies in...
This can be demonstrated by analyzing the data from the Physician's Reactions case study. This study looked at how doctors treat overweight patients compared with their average-weight counterparts. The mean amount of time doctors indicated they would spend with overweight patients was 24.73 minutes, with a standard deviation of 9.65 (Lane, D., n.d.).
Another way to apply summarized data is to calculate probability. If one wanted to find the probability that a doctor would spend over 31 minutes with one of the 38 simulated overweight patients, one could do so by dividing the number of simulated patients whose doctor indicated they would spend more than 31 minutes with them by the total number of patients (2/38 = 0.0526, or 5.26%). One can also estimate this probability from the same data by assuming a normal distribution, which yields p = 0.258 (Lane, D., n.d.).
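Both calculations above can be sketched in a few lines of Python using only the standard library; the figures are taken directly from the case study as reported here.

```python
from statistics import NormalDist

# Figures from the Physician's Reactions case study (Lane, n.d.)
mean_minutes = 24.73   # mean time doctors said they would spend
sd_minutes = 9.65      # standard deviation of that time
n_patients = 38        # simulated overweight patients
over_31 = 2            # patients whose doctor indicated > 31 minutes

# Relative-frequency (empirical) probability
empirical_p = over_31 / n_patients
print(f"Empirical P(time > 31) = {empirical_p:.4f}")   # 0.0526

# Probability under an assumed normal distribution
dist = NormalDist(mu=mean_minutes, sigma=sd_minutes)
normal_p = 1 - dist.cdf(31)
print(f"Normal-model P(time > 31) = {normal_p:.3f}")   # 0.258
```

Note how the two approaches give different answers (5.26% vs. 25.8%): the empirical estimate counts only what was observed, while the normal model smooths the data into an assumed distribution.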
Standard Normal Distribution
One of the more unique ways to summarize a data set is with a standardized (z) score. A z score indicates how many standard deviations above or below the mean a data point is. An example would be a student receiving a z score of -0.57 on a test: the student could infer that they had scored 0.57 standard deviations below the class mean (Lane, D., n.d.).
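A minimal sketch of the z-score formula, z = (x - mean) / sd, with hypothetical class numbers (the mean of 80 and standard deviation of 7 are illustrative assumptions, not from the source):

```python
def z_score(x, mean, sd):
    """How many standard deviations x lies above (+) or below (-) the mean."""
    return (x - mean) / sd

# Hypothetical test: class mean 80, SD 7, student scored 76
z = z_score(76, 80, 7)
print(round(z, 2))  # -0.57
```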
The final chapter of this book encourages people to be critical when taking in statistics. Someone taking a critical approach tries to assess statistics by asking questions and researching the origins of a statistic when that information is not provided. The book ends by encouraging readers to know the limitations of statistics and understand how statistics are...
Answer: The evaluated group of 500 patients in this study is a sample. All patients with high cholesterol make up the larger group that serves as the population. The 500 patients randomly selected from that population, of whom 67% were found to have heart disease, constitute the sample.
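The sample proportion here is the statistic used to infer about the population; a quick sketch of the arithmetic, adding the standard error of a proportion as a standard inferential quantity (not stated in the source):

```python
n = 500                           # sample size from the study
with_disease = round(0.67 * n)    # 67% of the sample had heart disease
print(with_disease)               # 335

p_hat = with_disease / n          # sample proportion, the estimate of the
                                  # unknown population proportion
# Standard error of a sample proportion: sqrt(p(1-p)/n)
se = (p_hat * (1 - p_hat) / n) ** 0.5
print(round(se, 3))               # 0.021
```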
We need to acknowledge that our methods to control overweight and obesity may begin, but must not end, with individual accountability. Few diseases demand as broad an approach as the effort to halt and reduce levels of overweight and obesity, and in few places are the stakes higher. Employers seem to have accepted this and are attempting to develop programs to address it.
Inferential statistics has two approaches for making inferences about parameters. The first is the parametric method, which either knows or assumes that the data come from a known type of probability distribution. There are many well-known distributions for which parametric methods can be used, such as the normal distribution, the chi-square distribution, and Student's t distribution. If the underlying distribution is known, the data can be tested accordingly. However, most data do not have a known underlying distribution, so certain assumptions must be made to test the data parametrically: for example, that all populations are normal (or at least share the same distribution) and that all populations have the same error variance. If these assumptions are correct, a parametric test will yield more accurate and precise estimates of the parameters being tested. If they are incorrect, the test will have very low statistical power, which reduces the probability of rejecting the null hypothesis when the alternative hypothesis is true. So what happens when the data are known not to fit any distribution? This is when nonparametric methods are used.
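The contrast can be sketched with two test statistics on hypothetical samples: a pooled two-sample t statistic (parametric, relying on the normality and equal-variance assumptions above) and a Wilcoxon rank-sum statistic (nonparametric, which replaces raw values with ranks so no distribution is assumed). The sample values are illustrative, not from the source.

```python
from statistics import mean, stdev

def pooled_t(a, b):
    """Two-sample t statistic assuming equal variances (parametric).
    Valid only if both samples come from normal populations with the
    same error variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def rank_sum(a, b):
    """Wilcoxon rank-sum statistic for sample a (nonparametric):
    ranks replace raw values, with ties given their average rank."""
    pooled = sorted(a + b)
    ranks = {}
    for v in set(pooled):
        positions = [i + 1 for i, x in enumerate(pooled) if x == v]
        ranks[v] = sum(positions) / len(positions)
    return sum(ranks[x] for x in a)

# Hypothetical samples
control = [24, 26, 23, 25, 27]
treated = [29, 31, 28, 30, 32]
print(round(pooled_t(treated, control), 2))  # 5.0
print(rank_sum(treated, control))            # 40.0
```

The t statistic is only meaningful under its distributional assumptions; the rank-sum statistic sacrifices some power when those assumptions hold, in exchange for validity when they do not.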
Many theories of logic use mathematical terms to show how premises lead to conclusions. Bayesian confirmation theory relates directly to probability. When applying this theory, a logician must know the probability of a given situation, have a conditional rule, and then apply that probability when the conditional rule is met: the probability of the situation is x if y occurs, or z if it does not. For example, if there is a high probability that a storm will occur when the temperature drops, and there is no temperature change, then it will most likely not rain, because the condition was not met (Strevens, 2012). By using observational data such as weather patterns, a person can arrive at a logical prediction or conclusion that will most likely come true based...
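Bayesian updating of this kind follows Bayes' theorem, P(storm | drop) = P(drop | storm) P(storm) / P(drop). A minimal sketch with the storm example; all probabilities are illustrative assumptions, not from the source.

```python
# Illustrative prior and conditional probabilities (assumed values)
p_storm = 0.30                # prior P(storm)
p_drop_given_storm = 0.80     # conditional rule: P(temp drop | storm)
p_drop_given_no_storm = 0.10  # P(temp drop | no storm)

# Total probability of observing a temperature drop
p_drop = (p_drop_given_storm * p_storm
          + p_drop_given_no_storm * (1 - p_storm))

# Posterior: probability of a storm once the drop is observed
posterior = p_drop_given_storm * p_storm / p_drop
print(round(posterior, 3))  # 0.774
```

Observing the temperature drop raises the storm probability from 0.30 to about 0.77; if the drop is not observed, the posterior instead falls below the prior, matching the reasoning in the paragraph above.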
Inferential statistics establishes the methods of analysis used for drawing conclusions beyond the immediate data alone, about a population, from general conditions or data collected from a sample in an experiment or study (Jackson, 2012; Trochim & Donnelly, 2008). With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data. For instance, we use inferential statistics to try to infer from sample data what the population might think. A prerequisite for inferential statistics is a sampling distribution for the outcome statistic, often supported by general linear models; researchers use the related inferential statistics to determine confidence (Hopkins, Marshall, Batterham, & Hanin, 2009).
Standard deviation is a measure of how spread out the numbers are. It describes the dispersion of a data set around its mean: the farther the data lie from the mean, the higher the deviation. It is denoted by the Greek letter sigma (σ).
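The definition above can be sketched directly: σ is the square root of the average squared distance of each value from the mean. The sample values are illustrative.

```python
from math import sqrt

def std_dev(data, population=True):
    """Population (sigma) or sample (s) standard deviation:
    the typical spread of the values around their mean."""
    m = sum(data) / len(data)
    squared_devs = sum((x - m) ** 2 for x in data)
    n = len(data) if population else len(data) - 1
    return sqrt(squared_devs / n)

values = [2, 4, 4, 4, 5, 5, 7, 9]
print(std_dev(values))  # 2.0 (population sigma)
```

The sample version divides by n - 1 instead of n, which matters for the inferential methods discussed earlier, where σ must be estimated from a sample.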
The father of quantitative analysis, Rene Descartes, thought that in order to know and understand something, you have to measure it (Kover, 2008). Quantitative research uses two main types of sampling: probabilistic and purposive. Probabilistic sampling gives everyone within the studied population an equal chance of being included; purposive sampling selects participants according to predefined benchmarks. The primary collection of data is from tests or standardized questionnaires, structured interviews, and closed-ended observational protocols; the secondary means of data collection includes official documents. In this type of study, the data are analyzed to test one or more expressed hypotheses. Descriptive and inferential analyses are the two types of data analysis used, and the work advances from descriptive to inferential. The next step in the process is data interpretation, where the goal is to give meaning to the results with regard to the hypothesis the theory was derived from. Data interpretation techniques include generalization, theory-driven interpretation, and interpretation of theory (Gelo, Braakmann, & Benetka, 2008). The discussion should bring together the findings and put them into the context of the framework guiding the study (Black, Gray, Airasain, Hector, Hopkins, Nenty, & Ouyang, n.d.). It should include an interpretation of the results; descriptions of themes, trends, and relationships; the meanings of the results; and the limitations of the study. In the conclusion, one wants to end the study by providing a synopsis and final comments, including a summary of findings, recommendations, and future research (Black, Gray, Airasain, Hector, Hopkins, Nenty, & Ouyang, n.d.). Deductive reasoning is used in studies...
Quantitative methods in the social sciences are an effective tool for understanding patterns and variation in social data. They are the systematic, numeric collection and objective analysis of data that can be generalized to a larger population and that seek to find causes of variance (Matthews and Ross 2010, p.141; Henn et al. 2009, p.134). These methods are often debated, but quantitative measurement is important to the social sciences because of the numeric evidence it provides, which can be used to drive more in-depth qualitative research and to focus regional policy, to name a few uses (Johnston et al. 2014). Basic quantitative methods, such as descriptive and inferential statistics, are used regularly to identify and explain large social trends that can then...
Whether or not people notice the importance of statistics, they use them in their everyday lives. Statistics have become more and more important for different cohorts of people, from farmers to academics and politicians. For example: Cambodian farmers produce an average of three tons of rice per hectare; about eighty percent of the Cambodian population are farmers; at least two million people support party A; and so on. According to the University of Melbourne, statistics are used to make conclusive estimates about the present or to predict the future (The University of Melbourne, 2009). Because of their significance, statistics are used for many different purposes. Statistics are not always trustworthy; their reliability depends on factors such as the sample, the data collection methods, and the sources of the data. This essay will discuss how people can use statistics to present facts or to delude others. It will then discuss some of the criteria for a reliable statistical interpretation.