Statistics are necessary for scientific research because they allow researchers to analyze empirical data, interpret findings, and draw conclusions from the results. According to Portney and Watkins (2009), all studies require a description of subjects and responses, typically obtained through measures of central tendency, so all studies use descriptive statistics; the appropriate use of statistical tests underpins the validity of data interpretation. Although descriptive statistics permit only limited interpretations and do not support general conclusions, they are useful for understanding the study sample and establishing an appropriate framework for further analysis. Further analysis with appropriate statistical methods allows researchers to establish relationships between independent and dependent variables, define possible outcomes, and identify areas for future study. Statistics matter to researchers because they support more accurate investigation and interpretation of the data; without them, patterns in the data would be overlooked, leading to inaccurate and possibly subjective conclusions (Portney & Watkins, 2009).
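As a concrete illustration of the descriptive measures this passage refers to, here is a minimal Python sketch computing central tendency and variability; the score values are invented for the example:

```python
# Descriptive statistics for a small, hypothetical sample of scores.
import statistics

scores = [72, 85, 90, 68, 77, 85, 93, 80, 85, 74]  # made-up data

print("n      :", len(scores))
print("mean   :", statistics.mean(scores))    # central tendency
print("median :", statistics.median(scores))  # central tendency
print("mode   :", statistics.mode(scores))    # most frequent value
print("stdev  :", statistics.stdev(scores))   # sample variability
```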
Frequency distribution is a method in descriptive statistics for arranging the values of one or more variables in a sample, summarizing how those values are distributed. It is the most basic and most frequently used method in statistics because it organizes data into tables that can later be used to calculate averages or measure variability. Data organized into a frequency distribution are easier to work with than raw data obtai...
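To make the idea concrete, a frequency distribution of the sort described can be tallied in a few lines of Python; the grade values below are invented for illustration:

```python
# Build a simple frequency distribution table from raw values.
from collections import Counter

grades = ["B", "A", "C", "B", "B", "A", "D", "C", "B", "A"]  # hypothetical data

freq = Counter(grades)
total = len(grades)
for value, count in sorted(freq.items()):
    print(f"{value}: {count} ({count / total:.0%})")
```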
... middle of paper ...
...closer to the population mean, and the plot would display a normal curve, because the sampling distribution of the mean tends toward a normal curve (Portney & Watkins, 2009). When the frequency distribution shows a normal curve, its variability can be determined and the standard error of the mean estimated from the sample data. An estimate of the population distribution allows researchers to establish the probability of selecting a sample with a given mean. Although a sampling distribution cannot be used to predict single outcomes, sample data can be used to draw inferences about the entire population from one sample, even though variance is never measured from it directly.
However, sample data find application in many studies that require estimating unknown population parameters (Portney & Watkins, 2009).
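A quick simulation makes the claim about sampling distributions concrete: repeatedly drawing samples from a skewed population and collecting their means produces an approximately normal distribution of means whose spread matches the standard error. The population, sample size, and repetition count below are arbitrary choices for the demonstration:

```python
# Simulate the sampling distribution of the mean.
import random
import statistics

random.seed(42)
# A deliberately skewed (exponential) population, not normal itself.
population = [random.expovariate(1.0) for _ in range(100_000)]

n = 50  # size of each sample
sample_means = [statistics.mean(random.sample(population, n)) for _ in range(2_000)]

# The means cluster near the population mean, roughly normally,
# and their spread approximates the standard error sigma / sqrt(n).
print("population mean      :", statistics.mean(population))
print("mean of sample means :", statistics.mean(sample_means))
print("sd of sample means   :", statistics.stdev(sample_means))
print("sigma / sqrt(n)      :", statistics.stdev(population) / n ** 0.5)
```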
The final chapter of this book encourages readers to be critical when taking in statistics. Someone taking a critical approach tries to assess statistics by asking questions and researching a statistic's origins when that information is not provided. The book ends by encouraging readers to know the limitations of statistics and to understand how statistics are...
A researcher determines that 42.7% of all downtown office buildings have ventilation problems. Is this a statistic or a parameter? Explain your answer.
There are two histograms, one showing information on GPA and one showing information on final grade. Histograms are commonly used with interval- or ratio-level data (Corty, 2007). The GPA data are slightly skewed to the right, meaning a positive skew, and the distribution is peaked (leptokurtic). The final-grade histogram also has a leptokurtic frequency distribution, but it is skewed to the left, meaning a negative skew.
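Rather than eyeballing skew and peakedness from a histogram, both can be computed directly; the sketch below assumes SciPy is available and uses invented GPA values:

```python
# Quantify skewness and kurtosis numerically (requires scipy).
from scipy.stats import kurtosis, skew

gpas = [2.1, 2.4, 2.5, 2.6, 2.7, 2.8, 2.8, 2.9, 3.0, 3.6, 3.9]  # made-up data

print("skewness:", skew(gpas))      # > 0: right (positive) skew
print("kurtosis:", kurtosis(gpas))  # > 0 excess kurtosis: leptokurtic (peaked)
```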
Not a random sample. They sampled "well-off people." That subset may be the most likely to purchase, but they aren't the only ones who can or would purchase their...
Inferential statistics establish the methods used to draw conclusions about a population, beyond the immediate data alone, from data collected in a sample (Jackson, 2012; Trochim & Donnelly, 2008). With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data; for instance, we use inferential statistics to infer from sample data what the population might think. Developing inferential statistics requires a general linear model for the sampling distribution of the outcome statistic; researchers then use the related inferential statistics to determine confidence (Hopkins, Marshall, Batterham, & Hanin, 2009).
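To make "inferring from sample data what the population might think" concrete, the sketch below builds a 95% confidence interval for a population mean from a single sample; SciPy is assumed and the data are hypothetical:

```python
# Inferential statistics: 95% confidence interval for the population
# mean, estimated from one sample (requires scipy).
import statistics
from scipy import stats

sample = [6.1, 5.4, 7.2, 6.8, 5.9, 6.5, 7.0, 6.3, 5.7, 6.6]  # made-up data

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)      # two-tailed critical t value

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"sample mean: {mean:.2f}")
print(f"95% CI     : ({low:.2f}, {high:.2f})")
```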
In the example above, the survey needed not only to be expanded but also diversified. By including the women and other workers, you make the statistics more accurate because they then represent the TV-watching habits of ALL the company's employees. However, if the company is very large, it would be difficult to interview every single employee. The solution to this problem is called random representative sampling, sketched below.
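A minimal sketch of simple random sampling follows, with a made-up roster; every employee has the same chance of selection:

```python
# Simple random sampling: each employee is equally likely to be chosen.
import random

employees = [f"employee_{i}" for i in range(1, 501)]  # hypothetical roster

random.seed(7)  # fixed seed so the demonstration is repeatable
survey_sample = random.sample(employees, k=50)  # interview 50 of the 500

print(survey_sample[:5])
```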
The development of knowledge requires a number of processes to establish credible data and to ensure the validity and appropriateness of how it can be used in the future. For the healthcare industry, this has made it possible to create new types of interventions and to provide adequate care across a number of fields within the system. Research, then, has been essential in providing definitive data, either by disproving previous beliefs or by confirming newly found data and methods. Moreover, research itself follows a methodological process. Among the notable methods, quantitative research is often used for its systematic approach (Polit & Beck, 2006). It applies the scientific method and relies on numerical data (Polit & Beck, 2006). Here, researchers make use of surveys, scales, or other instruments that place a numerical value on their subjects (Polit & Beck, 2006). In the end, the resulting data are neutral and statistical. Like any approach it is not perfect, yet it can yield valuable data.
The authors of this article have outlined the purpose, aims, and objectives of the study. The abstract also notes the methods used, a quantitative approach to data collection, along with the results and the conclusion of the study. It is important that authors present the essential components of a study in the abstract, because the abstract may be the only section readers use to decide whether the study is useful or worth reading in full (Coughlan, Cronin, & Ryan, 2007; Ingham-Broomfield, 2008, p. 104; Stockhausen & Conrick, 2002; Nieswiadomy, 2008, p. 380).
Due to the invisibility of the population, a sampling frame cannot be developed. Without the ...
...ferred because it produces meaningful information about each data point and where it falls within its normal distribution, and it provides a crude indicator of outliers (Etzkorn, 2011).
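The truncated passage appears to describe standardized (z) scores; assuming that reading, here is a minimal sketch of how each point's position in its distribution, and a crude outlier flag, can be computed (the data are invented):

```python
# Standardized (z) scores: distance of each point from the mean in
# standard-deviation units; a large |z| is a crude outlier indicator.
import statistics

data = [12, 15, 14, 13, 16, 15, 14, 40, 13, 15]  # made-up data

mu = statistics.mean(data)
sigma = statistics.stdev(data)

for x in data:
    z = (x - mu) / sigma
    flag = "  <- possible outlier" if abs(z) > 2 else ""
    print(f"{x:>3}: z = {z:+.2f}{flag}")
```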
The father of quantitative analysis, Rene Descartes, thought that in order to know and understand something, you have to measure it (Kover, 2008). Quantitative research uses two main types of sampling, probabilistic and purposive. Probabilistic sampling gives everyone within the studied population an equal chance of being included; purposive sampling selects participants according to preset criteria. The primary collection of data is from tests or standardized questionnaires, structured interviews, and closed-ended observational protocols; the secondary means of data collection includes official documents. In this study, the data are analyzed to test one or more expressed hypotheses. Descriptive and inferential analyses are the two types of data analysis used, and the work advances from descriptive to inferential. The next step in the process is data interpretation, whose goal is to give meaning to the results with respect to the hypothesis the theory was derived from. Interpretation techniques include generalization, theory-driven interpretation, and interpretation of theory (Gelo, Braakmann, & Benetka, 2008). The discussion should bring the findings together and put them into the context of the framework guiding the study (Black, Gray, Airasian, Hector, Hopkins, Nenty, & Ouyang, n.d.). It should include an interpretation of the results; descriptions of themes, trends, and relationships; the meanings of the results; and the limitations of the study. The conclusion ends the study with a synopsis and final comments, including a summary of findings, recommendations, and future research (Black et al., n.d.). Deductive reasoning is used in studies...
This chapter taught me the importance of understanding statistical data and evaluating it with common sense. Almost every day we are subjected to statistical data in newspapers and on TV. My usual reaction was to accept those statistics as valid, which I think is a fair assessment of most people's reaction. However, reading this chapter opened my eyes to the fact that statistical data can be very misleading. It shows how data can be skewed to support a certain group's agenda. Although most statistical data presented may not seem to affect us personally in our daily lives, it can have an impact; for example, statistics can influence the way people vote on certain issues.
The Collier Encyclopedia's definition of probability is the concern for events that are not certain and the reasonableness of one expectation over another. These expectations are usually based on some facts about past events, or what is known as statistics. Collier describes statistics as the science of the classification and manipulation of data in order to draw inferences. Inferences here can be read to mean expectations, leading to the conclusion that the two go hand in hand in accomplishing what mankind has tried to accomplish since the beginning of time: predicting the future. Science holds that this is the most accurate way to predict events yet to occur, which has led to it being the most widely accepted "fortune telling" tool in the world today.
The Gaussian distribution is a continuous probability distribution whose density gives the probability that a real-valued observation will fall between any two real limits, with the curve approaching zero on either side. It occurs very commonly. Gaussian distributions are extremely important in statistics and are often used in the natural and social sciences to model real-valued random variables whose distributions are not known. The Gaussian distribution is also referred to as the bell curve or normal distribution.
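For reference, the density the passage describes, for a normal distribution with mean $\mu$ and standard deviation $\sigma$, is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$

and the probability that an observation falls between two limits $a$ and $b$ is the area $\int_a^b f(x)\,dx$ under this curve.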
Researchers, professionals, and others use statistics to support their claims or findings. Even though statistics are not absolute fact, because conclusions are mostly drawn from a sample group representative of a specific population subjected to the research, they are commonly used as a basis for decision making in daily living, studies, work, scientific research, politics, and other planning. The creator of the documentary film "An Inconvenient Truth," Al Gore, for instance, in his campaign to educate people about climate change, used statistics to alert people that everyone on earth is polluting the environment and should participate in solving the problem. He collected data from many different countries with an in...