205 results for statistical reports
in Queensland University of Technology - ePrints Archive
Abstract:
Open the sports or business section of your daily newspaper, and you are immediately bombarded with an array of graphs, tables, diagrams, and statistical reports that require interpretation. Across all walks of life, the need to understand statistics is fundamental. Given that our youngsters' future world will be increasingly data laden, scaffolding their statistical understanding and reasoning is imperative, from the early grades on. The National Council of Teachers of Mathematics (NCTM) continues to emphasize the importance of early statistical learning; data analysis and probability was the Council's professional development "Focus of the Year" for 2007–2008. We need such a focus, especially given the results of the statistics items from the 2003 NAEP. As Shaughnessy (2007) noted, students' performance was weak on the more complex items involving interpretation or application of information in graphs and tables. Furthermore, little or no gain was made between the 2000 and 2003 NAEP studies. One approach I have taken to promote young children's statistical reasoning is through data modeling. Having implemented a number of model-eliciting activities involving working with data in grades 3–9 (e.g., English 2010), I observed how competently children could create their own mathematical ideas and representations before being instructed how to do so. I thus wished to introduce data-modeling activities to younger children, confident that they would likewise generate their own mathematics. I recently implemented data-modeling activities in three first-grade classrooms of six-year-olds. I report on some of the children's responses and discuss the components of data modeling the children engaged in.
Abstract:
Statistical reports of SMEs' Internet usage from various countries indicate steady growth. However, deeper investigation of SMEs' e-commerce adoption and usage reveals that a number of SMEs fail to realize the full potential of e-commerce. Factors such as a lack of Information Systems and Information Technology tools and models suited to SMEs, and a lack of technical expertise and specialized knowledge within and outside the SME, have the greatest effect. This study addresses these two factors in two steps: first, it introduces a conceptual tool for intuitive interaction; second, it explains the implementation process of the conceptual tool through a case study. The subject chosen for the case study is a real estate SME from India. The design and development process of the SME's website was captured over the four-month duration of the study. Results indicated specific benefits for web designers and SME business owners, and showed that the conceptual tool is easy to use without requiring technical expertise or specialized knowledge.
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions used to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that all of them can identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the choice of volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
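For reference, the two statistical loss functions compared above are usually defined as follows in the multivariate volatility literature; this is a sketch of the standard forms, writing \hat{\Sigma}_t for the volatility proxy and H_t for a model's forecast of the conditional covariance matrix (the thesis may use slightly different normalisations):

\[
L_{\mathrm{MSE}}(\hat{\Sigma}_t, H_t) = \bigl\lVert \hat{\Sigma}_t - H_t \bigr\rVert_F^2,
\qquad
L_{\mathrm{QLIKE}}(\hat{\Sigma}_t, H_t) = \log\lvert H_t \rvert + \operatorname{tr}\bigl(H_t^{-1}\hat{\Sigma}_t\bigr).
\]

MSE penalises over- and under-prediction of volatility symmetrically, whereas QLIKE penalises under-prediction more heavily, which is one reason the two loss functions can rank the same set of forecasts differently.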
Abstract:
Quality oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good quality products and services is a key factor for organizations and even governments. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance, and in some cases have been modified and developed to better suit the characteristics and needs of the health sector. Some of the work in the healthcare area appears to have evolved independently of the development of industrial statistical process control methods. Analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors therefore presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. This research therefore investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. First, a framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in the available datasets and data flow; then a data-capturing algorithm using Bayesian decision-making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from the industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. In addition, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root-cause analysis within the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time frame prior to the signal. The approach also yields highly informative estimates of change point parameters, since the results take the form of probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The Bayesian change point estimators are then developed further for healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts; variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention, monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, together with the empirical results, indicate that the developed Bayesian estimators are a strong alternative for change point estimation within the quality improvement cycle, in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is further enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen here in the general context of quality control may also be extended back to the industrial and business domains where quality monitoring was initially developed.
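To make the Poisson step-change case concrete, the following is a minimal sketch of Bayesian change point estimation for a single step change in a Poisson rate. It is not the thesis's implementation (which uses hierarchical models and MCMC): here the pre- and post-change rates are given conjugate Gamma priors and integrated out analytically, so the posterior over the change point tau can be computed exactly. The data and prior parameters are illustrative assumptions.

import numpy as np
from scipy.special import gammaln

def changepoint_posterior(y, a=1.0, b=1.0):
    """Exact posterior over the change point tau of a Poisson process.

    y is an array of event counts per period; counts 1..tau have rate
    lambda1 and counts tau+1..n have rate lambda2, with independent
    Gamma(a, b) priors on both rates (marginalised analytically) and a
    uniform prior on tau.
    """
    n, total = len(y), y.sum()
    log_post = np.empty(n - 1)
    for tau in range(1, n):  # change occurs after period tau
        s1 = y[:tau].sum()
        s2 = total - s1
        # log marginal likelihood: two conjugate Gamma-Poisson segments
        log_post[tau - 1] = (gammaln(a + s1) - (a + s1) * np.log(b + tau)
                             + gammaln(a + s2) - (a + s2) * np.log(b + n - tau))
    log_post -= log_post.max()  # normalise safely in log space
    post = np.exp(log_post)
    return post / post.sum()

# Illustrative data: the rate steps up from 4 to 7 after period 30.
rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(4, 30), rng.poisson(7, 20)])
post = changepoint_posterior(y)
print("posterior mode of tau:", np.argmax(post) + 1)

Because the output is a full probability distribution over tau rather than a single point estimate, it supports exactly the kind of probability quantification the abstract highlights.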
Abstract:
To enhance workplace safety in the construction industry, it is important to understand the interrelationships among the safety risk factors associated with construction accidents. This study incorporates systems theory into Heinrich's domino theory to explore the interrelationships of risks and break the chain of accident causation. Through both empirical and statistical analyses of 9,358 accidents that occurred in the U.S. construction industry between 2002 and 2011, the study investigates relationships between accidents and injury elements (e.g., injury type, part of body, injury severity) and the nature of construction injuries by accident type. The study then discusses relationships between accidents and risks, including worker behavior, injury source, and environmental condition, and identifies key risk factors and risk combinations causing accidents. The research outcomes will assist safety managers in prioritizing risks according to the likelihood of accident occurrence and injury characteristics, and in paying more attention to balancing significant risk relationships, in order to prevent accidents and achieve safer working environments.
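As an illustration of the kind of accident-injury association analysis described above (not the study's actual method, and with invented counts rather than its real 9,358-record dataset), a contingency-table test of independence between accident type and injury severity could look like this:

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are accident types, columns are injury
# severities (minor / hospitalised / fatal). The numbers are invented
# purely for illustration.
table = np.array([
    [120, 80, 30],   # falls
    [ 90, 40, 10],   # struck-by
    [ 60, 25, 15],   # caught-in/between
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value indicates injury severity is not independent of
# accident type, i.e. the two factors are interrelated.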
Abstract:
The relationship between mathematics and statistical reasoning frequently receives comment (Vere-Jones 1995; Moore 1997); however, most research in the area tends to focus on mathematics anxiety. Gnaldi (2003) showed that in a statistics course for psychologists, the statistical understanding of students at the end of the course depended on the students' basic numeracy, rather than on the number or level of previous mathematics courses they had undertaken. As part of a study into the development of statistical thinking at the interface between secondary and tertiary education, students enrolled in an introductory data analysis subject were assessed on their statistical reasoning, basic numeracy skills, mathematics background, and attitudes towards statistics. This work reports on some key relationships between these factors, and in particular on the importance of numeracy to statistical reasoning.
Abstract:
The relationship between mathematics and statistical reasoning frequently receives comment (Vere-Jones 1995; Moore 1997); however, most research in the area tends to focus on maths anxiety. Gnaldi (2003) showed that in a statistics course for psychologists, the statistical understanding of students at the end of the course depended on the students' basic numeracy, rather than on the number or level of previous mathematics courses they had undertaken. As part of a study into the development of statistical thinking at the interface between secondary and tertiary education, students enrolled in an introductory data analysis subject were assessed on their statistical reasoning ability, basic numeracy skills, and attitudes towards statistics. This work reports on the relationships between these factors, and in particular on the importance of numeracy to statistical reasoning.
Abstract:
Information on climate variations is essential for research in many areas, such as the performance of buildings and agricultural production. However, recorded meteorological data are often incomplete: the number of recording locations may be limited, and the number of recorded climatic variables and the recording intervals can also be inadequate. Consequently, the hourly data for key weather parameters required by many building simulation programmes are typically not readily available. To overcome this gap in measured information, several empirical methods and weather data generators have been developed. They generally employ statistical analysis techniques to model the variations of individual climatic variables, while the possible interactions between different weather parameters are largely ignored. Based on a statistical analysis of 10 years of historical hourly climatic data for all Australian capital cities, this paper reports strong correlations between several specific weather variables. Strong linear correlations are found between the hourly variations of global solar irradiation (GSI) and dry bulb temperature (DBT), and between the hourly variations of DBT and relative humidity (RH): with an increase in GSI, DBT generally increases, while RH tends to decrease. However, no such clear correlation is found between DBT and atmospheric pressure (P), or between DBT and wind speed. These findings will be useful for research and practice in building performance simulation.
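A minimal sketch of the hourly-variation correlation analysis this kind of study performs; the file name, column names (GSI, DBT, RH) and data layout are assumptions for illustration, not the paper's actual dataset:

import pandas as pd

# Hypothetical hourly weather records with columns GSI (global solar
# irradiation), DBT (dry bulb temperature) and RH (relative humidity).
df = pd.read_csv("hourly_weather.csv", parse_dates=["timestamp"])

# Hour-to-hour variations (first differences) of each variable.
deltas = df[["GSI", "DBT", "RH"]].diff().dropna()

# Pearson correlation matrix of the hourly variations; the paper reports
# a strong positive GSI-DBT relationship and a negative DBT-RH one.
print(deltas.corr(method="pearson"))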
Using the Hofstede-Gray Framework to Argue Normatively for an Extension of Islamic Corporate Reports