851 results for statistical methods
Abstract:
Despite promising benefits and advantages, there are reports of failures and low realisation of benefits in Enterprise System (ES) initiatives. Among the research on the factors that influence ES success, there is a dearth of studies on the knowledge implications of multiple end-user groups using the same ES application. An ES facilitates the work of several user groups, ranging from strategic management, through management, to operational staff, all using the same system for multiple objectives. Given the fundamental characteristics of ES – integration of modules, business process views, and aspects of information transparency – it is necessary that all frequent end-users share a reasonable amount of common knowledge and integrate their knowledge to yield new knowledge. Recent literature on ES implementation highlights the importance of Knowledge Integration (KI) for implementation success. Unfortunately, the importance of KI is often overlooked, and little is known about the role of KI in ES success. Many organisations do not achieve the potential benefits from their ES investment because they do not consider the need for, or their ability to achieve, integration of their employees’ knowledge. This study is designed to improve our understanding of the influence of KI among ES end-users on operational ES success. The three objectives of the study are: (I) to identify and validate the antecedents of KI effectiveness, (II) to investigate the impact of KI effectiveness on the goodness of individuals’ ES-knowledge base, and (III) to examine the impact of the goodness of individuals’ ES-knowledge base on operational ES success. For this purpose, we employ the KI factors identified by Grant (1996) and the IS-impact measurement model from the work of Gable et al. (2008) to examine ES success. The study derives its findings from data gathered from six Malaysian companies to achieve the three-fold goal of this thesis as outlined above.
The relationships between the antecedents of KI effectiveness and its consequences are tested using 188 responses to a survey representing the views of the management and operational employment cohorts. Using statistical methods, we confirm three antecedents of KI effectiveness and validate their consequences for ES success. The findings demonstrate a statistically significant positive impact of KI effectiveness on ES success, with KI effectiveness contributing almost one-third of ES success. This research makes a number of contributions to the understanding of the influence of KI on ES success. First, based on the empirical work using a complete nomological net model, the role of KI effectiveness in ES success is evidenced. Second, the model provides a theoretical lens for a more comprehensive understanding of the impact of KI on the level of ES success. Third, restructuring the dimensions of the knowledge-based theory to fit the context of ES extends its applicability and generalisability to contemporary Information Systems. Fourth, the study develops and validates measures for the antecedents of KI effectiveness. Fifth, the study demonstrates the statistically significant positive influence of the goodness of KI on ES success. From a practical viewpoint, this study emphasises the importance of KI effectiveness as a direct antecedent of ES success. Practical lessons can be drawn from the work done in this study to empirically identify the critical factors among the antecedents of KI effectiveness that should be given attention.
Abstract:
Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and subsequently whether they are used in the decision-making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants, who made judgements about the ethicality of the action in the scenario and their intentions to act in the same manner if they were in that situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool with which to highlight individual medical ethical values. On average, individuals show a significant preference for non-maleficence over the other principles; however, and perhaps counter-intuitively, this preference does not appear to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state that they value these medical ethical principles, but they do not actually seem to use them directly in the decision-making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.
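The abstract’s measurement step can be illustrated with a small numeric sketch of the Analytic Hierarchy Process: a pairwise comparison matrix over the four principles is reduced to a priority vector via its principal eigenvector, with a consistency check. The comparison values below are invented for illustration and are not data from the study.

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale. A[i, j] says how strongly
# principle i is preferred to principle j. Values are ILLUSTRATIVE, not study
# data. Order: non-maleficence, autonomy, beneficence, justice.
A = np.array([
    [1.0, 3.0, 2.0, 4.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/4, 1/2, 1/3, 1.0],
])

# Principal-eigenvector method: priorities are the normalised eigenvector
# belonging to the largest eigenvalue of A.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1) divided by the random
# index RI = 0.90 for n = 4; CR < 0.1 is conventionally acceptable.
lambda_max = float(eigvals[k].real)
cr = ((lambda_max - 4) / 3) / 0.90
```

With these illustrative comparisons, `weights[0]` (non-maleficence) comes out largest, mirroring the abstract’s finding that non-maleficence is preferred on average.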
Abstract:
This paper considers VECMs for variables exhibiting cointegration and common features in the transitory components. While the presence of cointegration between the permanent components of series reduces the rank of the long-run multiplier matrix, a common feature among the transitory components leads to a rank reduction in the matrix summarizing short-run dynamics. The common feature also implies that there exist linear combinations of the first-differenced variables in a cointegrated VAR that are white noise, and traditional tests focus on testing for this characteristic. An alternative, however, is to test the rank of the short-run dynamics matrix directly. Consequently, we use the literature on testing the rank of a matrix to produce some alternative test statistics. We also show that these are identical to one of the traditional tests. The performance of the different methods is illustrated in a Monte Carlo analysis, which is then used to re-examine an existing empirical study. Finally, this approach is applied to provide a check for the presence of common dynamics in DSGE models.
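A minimal simulation can make the rank idea concrete. Below, a bivariate VECM is generated with a rank-one short-run matrix Gamma (so one combination of the first differences is white noise), Gamma is re-estimated by OLS with the cointegrating vector treated as known for simplicity, and its singular values reveal the rank reduction. All parameter values are invented, and this is a sketch of the setting, not the paper’s test procedure.

```python
import numpy as np

# Simulate: dy_t = alpha * (beta' y_{t-1}) + Gamma dy_{t-1} + eps_t
rng = np.random.default_rng(9)
T = 2000
alpha = np.array([-0.5, 0.0])          # adjustment coefficients
beta = np.array([1.0, -1.0])           # cointegrating vector (assumed known)
Gamma = np.array([[0.4, 0.2],
                  [0.0, 0.0]])         # rank 1: a common transitory feature

y = np.zeros((T, 2))
dy_prev = np.zeros(2)
for t in range(1, T):
    dy = alpha * (beta @ y[t - 1]) + Gamma @ dy_prev + rng.normal(0, 1, 2)
    y[t] = y[t - 1] + dy
    dy_prev = dy

# OLS of dy_t on the error-correction term and lagged differences
dY = np.diff(y, axis=0)
X = np.column_stack([y[1:-1] @ beta, dY[:-1]])
coef, *_ = np.linalg.lstsq(X, dY[1:], rcond=None)
Gamma_hat = coef[1:].T                 # estimated short-run dynamics matrix

s = np.linalg.svd(Gamma_hat, compute_uv=False)
rank_ratio = s[-1] / s[0]              # near zero indicates reduced rank
```

Testing the rank of `Gamma_hat` directly, as the paper proposes, amounts to asking whether its smallest singular value is statistically distinguishable from zero.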
Abstract:
Load in distribution networks is normally measured at the 11 kV supply points; little or nothing is known about the types of customers and their contributions to the load. This paper proposes statistical methods to decompose an unknown distribution feeder load into its customer load sector/subsector profiles. The approach used in this paper should assist electricity suppliers in economic load management, strategic planning and future network reinforcements.
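One way to sketch the decomposition idea (not necessarily the paper’s exact method) is non-negative least squares: the measured feeder load is expressed as a non-negative mixture of known sector load profiles. The half-hourly profiles and mixing weights below are invented.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
t = np.arange(48)  # 48 half-hour periods in a day

# Illustrative sector profiles (columns): evening-peaking domestic,
# daytime-peaking commercial, and flat industrial load shapes.
profiles = np.column_stack([
    1.0 + 0.8 * np.sin(2 * np.pi * (t - 36) / 48),
    1.0 + 0.8 * np.sin(2 * np.pi * (t - 26) / 48),
    np.ones(48),
])

true_mix = np.array([3.0, 2.0, 1.5])   # unknown sector contributions (MW)
feeder = profiles @ true_mix + rng.normal(0, 0.05, 48)  # measured feeder load

# Non-negative least squares recovers the sector weights
mix, residual = nnls(profiles, feeder)
```

The non-negativity constraint matters here: sector contributions to a feeder cannot be negative, so ordinary least squares could produce physically meaningless decompositions.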
Abstract:
A significant number of patients diagnosed with primary brain tumours report unmet information needs. Using concept mapping methodology, this study aimed to identify strategies for improving information provision, and to describe factors that health professionals understood to influence their provision of information to patients with brain tumours and their families. Concept mapping is a mixed methods approach that uses statistical methods to represent participants’ perceived relationships between elements as conceptual maps. These maps, and results of associated data collection and analyses, are used to extract concepts involved in information provision to these patients. Thirty health professionals working across a range of neuro-oncology roles and settings participated in the concept mapping process. Participants rated a care coordinator as the most important strategy for improving brain tumour care, with psychological support as a whole rated as the most important element of care. Five major themes were identified as facilitating information provision: health professionals’ communication skills, style and attitudes; patients’ needs and preferences; perceptions of patients’ need for protection and initiative; rapport and continuity between patients and health professionals; and the nature of the health care system. Overall, health professionals conceptualised information provision as ‘individualised’, dependent on these interconnected personal and environmental factors.
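The statistical core of concept mapping can be sketched as multidimensional scaling of a sorting-based dissimilarity matrix followed by hierarchical clustering. The five “statements” and the dissimilarity values below are invented; scikit-learn’s `MDS` and SciPy’s hierarchical clustering stand in for whatever software the study actually used.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# dissimilarity[i, j]: fraction of participants who did NOT sort statements
# i and j into the same pile (invented; first three are "communication"
# items, last two are "system" items).
dissimilarity = np.array([
    [0.00, 0.10, 0.20, 0.90, 0.80],
    [0.10, 0.00, 0.15, 0.85, 0.90],
    [0.20, 0.15, 0.00, 0.80, 0.85],
    [0.90, 0.85, 0.80, 0.00, 0.10],
    [0.80, 0.90, 0.85, 0.10, 0.00],
])

# Step 1: MDS places the statements on a 2-D "concept map".
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Step 2: hierarchical clustering groups the mapped statements into themes.
Z = linkage(squareform(dissimilarity), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Statements that participants frequently sorted together land close on the map and share a cluster label; in a real study each cluster would then be named as a theme, such as the five themes reported in the abstract.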
Abstract:
Most studies of in vitro fertilisation (IVF) outcomes use cycle-based data and fail to account for women who use repeated IVF cycles. The objective of this study was to examine the association between the number of eggs collected (EC), and the percentage fertilised normally, and women’s self-reported medical, personal and social histories. This study involved a cross-sectional survey of infertile women (aged 27-46 years) recruited from four privately-owned fertility clinics located in major cities of Australia. Regression modelling was used to estimate the mean EC and mean percentage of eggs fertilised normally, adjusted for age at EC. Appropriate statistical methods were used to take account of repeated IVF cycles by the same women. Among 121 participants who returned the survey and completed 286 IVF cycles, the mean age at EC was 35.2 years (SD 4.5). Women’s age at EC was strongly associated with the number of EC: <30 years, 11.7 EC; 30.0-<35.0 years, 10.6 EC; 35.0-<40.0 years, 7.3 EC; 40.0+ years, 8.1 EC; p<.0001. Prolonged use of oral contraceptives was associated with lower numbers of EC: never used, 14.6 EC; 0-2 years, 11.7 EC; 3-5 years, 8.5 EC; 6+ years, 8.2 EC; p=.04. Polycystic ovary syndrome (PCOS) was associated with more EC: have PCOS, 11.5 EC; no PCOS, 8.3 EC; p=.01. Occupational exposures may be detrimental to normal fertilisation: professional roles, 58.8%; trade and service roles, 51.8%; manual and other roles, 63.3%; p=.02. In conclusion, women’s age remains the characteristic most significantly associated with EC, but not with the percentage of eggs fertilised normally.
Abstract:
Background On-site wastewater treatment system (OWTS) siting, design and management has traditionally been based on site-specific conditions, with little regard to the surrounding environment or the cumulative effect of other systems in the environment. The general approach has been to apply the same framework of standards and regulations to all sites equally, regardless of the sensitivity, or lack thereof, of the receiving environment. Consequently, this has led to the continuing poor performance and failure of on-site systems, resulting in environmental and public health consequences. As a result, there is increasing realisation that more scientifically robust evaluations in regard to site assessment and the underlying ground conditions are needed. Risk-based approaches to on-site system siting, design and management are considered the most appropriate means of improving the current standards and codes for on-site wastewater treatment systems. The Project Research for this project was undertaken within the Gold Coast City Council region, the major focus being the semi-urban, rural residential and hinterland areas of the city that are not serviced by centralised treatment systems. The Gold Coast has over 15,000 on-site systems in use, with approximately 66% being common septic tank-subsurface dispersal systems. A recent study evaluating the performance of these systems within the Gold Coast area showed approximately 90% were not meeting the specified guidelines for effluent treatment and dispersal. The main focus of this research was to incorporate strong scientific knowledge into an integrated risk assessment process, allowing suitable management practices to be set in place to mitigate the inherent risks. To achieve this, research was undertaken focusing on three main aspects of the performance and management of OWTS. Firstly, an investigation into the suitability of soil for providing appropriate effluent renovation was conducted.
This involved detailed soil investigations, laboratory analysis and the use of multivariate statistical methods for analysing soil information. The outcomes of these investigations were developed into a framework for assessing soil suitability for effluent renovation. This formed the basis for the assessment of OWTS siting and design risks employed in the developed risk framework. Secondly, an assessment of the environmental and public health risks was performed, specifically related to the release of contaminants from OWTS. This involved detailed groundwater and surface water sampling and analysis to assess the current and potential risks of contamination throughout the Gold Coast region. Additionally, the assessment of public health risk incorporated the use of bacterial source tracking methods to identify the different sources of faecal contamination within monitored regions. Antibiotic resistance pattern analysis was utilised to determine the extent of human faecal contamination, with the outcomes utilised to provide a more indicative public health assessment. Finally, the outcomes of both the soil suitability assessment and the ground and surface water monitoring were utilised for the development of the integrated risk framework. The research outcomes achieved through this project enabled the primary research aims and objectives to be accomplished. This in turn will enable Gold Coast City Council to provide more appropriate assessment and management guidelines based on robust scientific knowledge, which will ultimately ensure that the potential environmental and public health impacts resulting from on-site wastewater treatment are minimised. As part of the implementation of suitable management strategies, a critical point monitoring (CPM) program was formulated. This entailed the identification of the key critical parameters that contribute to the characterised risks at monitored locations within the study area.
The CPM program will allow more direct procedures to be implemented, targeting the specific hazards at sensitive areas throughout the Gold Coast region.
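As a hedged illustration of the “multivariate statistical methods for analysing soil information”, the sketch below runs a principal component analysis on three invented, standardised soil properties; correlated properties load onto a dominant first component. The variables and values are not the study’s measurements.

```python
import numpy as np

# Invented soil data: cation exchange capacity (CEC), clay content
# (correlated with CEC), and pH (roughly independent of both).
rng = np.random.default_rng(2)
n = 40
cec = rng.normal(10, 3, n)
clay = 2.0 * cec + rng.normal(0, 2, n)
ph = rng.normal(6, 0.5, n)
X = np.column_stack([cec, clay, ph])

# Standardise each property, then eigendecompose the correlation matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()   # variance share per component
```

Because CEC and clay move together, the first principal component absorbs most of the variance; in a soil-suitability framework such components summarise many correlated laboratory measurements into a few interpretable axes.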
Abstract:
Population-wide associations between loci due to linkage disequilibrium can be used to map quantitative trait loci (QTL) with high resolution. However, spurious associations between markers and QTL can also arise as a consequence of population stratification. Statistical methods that cannot differentiate between loci associations due to linkage disequilibria and those caused in other ways can render false-positive results. The transmission-disequilibrium test (TDT) is a robust test for detecting QTL. The TDT exploits within-family associations that are not affected by population stratification. However, some TDTs are formulated in a rigid form, with reduced potential applications. In this study we generalize the TDT using mixed linear models to allow greater statistical flexibility. Allelic effects are estimated with two independent parameters: one exploiting the robust within-family information and the other the potentially biased between-family information. A significant difference between these two parameters can be used as evidence of spurious association. This methodology was then used to test the effects of the fourth melanocortin receptor (MC4R) on production traits in the pig. The new analyses supported the previously reported results; i.e., the studied polymorphism is either causal or in very strong linkage disequilibrium with the causal mutation, and provided no evidence of spurious association.
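The two-parameter idea can be sketched directly: regress the trait on the within-family genotype deviation (robust) and on the family-mean genotype (biased by stratification) and compare the slopes. The simulation below is illustrative, using simple least-squares slopes rather than the paper’s mixed linear model.

```python
import numpy as np

# Simulate sib pairs whose allele frequency varies across families, with a
# stratification effect on the trait that tracks that frequency.
rng = np.random.default_rng(3)
n_fam, sibs = 300, 2
fam_freq = rng.uniform(0.2, 0.8, n_fam)              # family allele frequency
geno = rng.binomial(2, np.repeat(fam_freq, sibs))    # allele count 0/1/2
strat = np.repeat(5.0 * fam_freq, sibs)              # population stratification
true_effect = 1.0
y = true_effect * geno + strat + rng.normal(0, 1, n_fam * sibs)

fam = np.repeat(np.arange(n_fam), sibs)
geno_mean = (np.bincount(fam, weights=geno) / sibs)[fam]  # family mean genotype
within = geno - geno_mean                                 # within-family deviation

def slope(x, t):
    x = x - x.mean()
    return float(np.dot(x, t) / np.dot(x, x))

b_within = slope(within, y)       # robust: stays near the true effect of 1.0
b_between = slope(geno_mean, y)   # inflated by the stratification effect
```

A clear gap between `b_between` and `b_within`, as here, is exactly the evidence of spurious association the paper’s test formalises; in the real MC4R analysis the two estimates agreed, supporting a genuine effect.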
Abstract:
A satellite-based observation system can continuously or repeatedly generate a user state vector time series that may contain useful information. One typical example is the collection of International GNSS Service (IGS) station daily and weekly combined solutions. Another example is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real-time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After reviewing frequently used hypothesis test statistics in the univariate analysis of GNSS state time series, the paper presents a number of T-squared multivariate analysis statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take the correlation between coordinate components into account, which is neglected in univariate analysis. Numerical analysis was conducted with the multi-year time series of an IGS station to demonstrate schematically the results of the multivariate hypothesis testing in comparison with the univariate hypothesis testing results. The results demonstrate that, in general, testing for multivariate mean shifts and outliers tends to reject fewer data samples than testing for univariate mean shifts and outliers at the same confidence level. It is noted that neither univariate nor multivariate data analysis methods are intended to replace physical analysis. Instead, they should be treated as complementary statistical methods for a priori or a posteriori investigations. Subsequent physical analysis is necessary to refine and interpret the results.
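A minimal sketch of a T-squared statistic for a multivariate mean shift in a 3-component coordinate series, using Hotelling’s one-sample test and its F transform. The covariance values are invented stand-ins for an east/north/up position series, not IGS data.

```python
import numpy as np
from scipy import stats

# Simulated 3-component position residuals in mm, with correlated
# components and a noisier up component (values invented).
rng = np.random.default_rng(4)
n, p = 200, 3
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 4.0, 0.8],
                [0.5, 0.8, 9.0]])
series = rng.multivariate_normal(np.zeros(p), cov, size=n)

mean = series.mean(axis=0)
S_inv = np.linalg.inv(np.cov(series, rowvar=False))

def mean_shift_p_value(mu0):
    """Hotelling one-sample T^2 test of H0: true mean equals mu0."""
    d = mean - mu0
    t2 = n * float(d @ S_inv @ d)
    f_stat = t2 * (n - p) / (p * (n - 1))   # T^2 -> F(p, n - p) under H0
    return float(stats.f.sf(f_stat, p, n - p))

p_true = mean_shift_p_value(np.zeros(p))       # correct mean: rarely rejected
p_shift = mean_shift_p_value(np.full(p, 2.0))  # 2 mm shift: clearly rejected
```

Because the statistic pools all three components through the inverse covariance, it accounts for their correlation, which is precisely what separate per-component univariate tests ignore.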
Abstract:
Reliability analysis is crucial to reducing unexpected downtime, severe failures and the ever-tightened maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest, as hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods were established largely on two assumptions: one is that the baseline failure distribution is accurate for the population concerned, and the other concerns the assumed form of covariate effects on hazards. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in practice. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations that these two assumptions impose on statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data or covariates is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research thus investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and effects on the reliability analysis results.
Since these existing approaches could underestimate the variance in regressions and introduce extra uncertainties to reliability analysis, the developed NNHMs are extended to include handling incomplete covariates as an integral part. The extended versions of NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate the new approach outperforms the typical incomplete covariates handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practices for multi-step reliability analysis, historical covariates were used to estimate the future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods thus would not be suitable because of the error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and unknown covariate values in future running steps of assets constitute an incomplete covariate set which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted and it demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
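The core idea of an NNHM, a network learning a non-linear covariate-to-hazard map without assuming a baseline failure distribution or a form for covariate effects, can be sketched with a tiny NumPy network fitted to the log-hazard so the output stays positive. The data, architecture and training details below are invented and far simpler than the models developed in the thesis.

```python
import numpy as np

# Invented "true" relationship: log-hazard is a non-linear function of a
# single condition covariate c, which a Cox-style linear form would miss.
rng = np.random.default_rng(5)
c = rng.uniform(-2, 2, (200, 1))
log_h = np.sin(2.0 * c) + 0.5 * c

# One-hidden-layer network, trained by plain full-batch gradient descent.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    hdn = np.tanh(c @ W1 + b1)
    out = hdn @ W2 + b2
    err = out - log_h                       # MSE gradient on the log-hazard
    gW2 = hdn.T @ err / len(c); gb2 = err.mean(axis=0)
    ghdn = (err @ W2.T) * (1 - hdn ** 2)    # backprop through tanh
    gW1 = c.T @ ghdn / len(c); gb1 = ghdn.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

hazard = np.exp(np.tanh(c @ W1 + b1) @ W2 + b2)   # predicted hazard, always > 0
mse = float(np.mean((np.log(hazard) - log_h) ** 2))
```

Modelling the log-hazard is a common device for keeping the hazard positive; the thesis’s extensions, such as handling incomplete covariates inside the model, build on this same network backbone.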
Abstract:
Purpose This study evaluated the predictive validity of three previously published ActiGraph energy expenditure (EE) prediction equations developed for children and adolescents. Methods A total of 45 healthy children and adolescents (mean age: 13.7 +/- 2.6 yr) completed four 5-min activity trials (normal walking, brisk walking, easy running, and fast running) in an indoor exercise facility. During each trial, participants wore an ActiGraph accelerometer on the right hip. EE was monitored breath by breath using the Cosmed K4b(2) portable indirect calorimetry system. Differences and associations between measured and predicted EE were assessed using dependent t-tests and Pearson correlations, respectively. Classification accuracy was assessed using percent agreement, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve. Results None of the equations accurately predicted mean EE across all four activity trials. Each equation, however, accurately predicted mean EE in at least one activity trial. The Puyau equation accurately predicted EE during slow walking. The Trost equation accurately predicted EE during slow running. The Freedson equation accurately predicted EE during fast running. None of the three equations accurately predicted EE during brisk walking. The equations exhibited fair to excellent classification accuracy with respect to activity intensity, with the Trost equation exhibiting the highest classification accuracy and the Puyau equation exhibiting the lowest. Conclusions These data suggest that the three accelerometer prediction equations do not accurately predict EE on a minute-by-minute basis in children and adolescents during overground walking and running. The equations may be useful, however, for estimating participation in moderate and vigorous activity.
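The validation statistics named in the Methods can be reproduced on toy labels: percent agreement between measured and predicted intensity classes, sensitivity and specificity for a binary “vigorous” decision, and ROC area computed from a continuous count score. All values below are invented, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Invented per-minute data: measured and predicted intensity class
# (0 = light, 1 = moderate, 2 = vigorous) and a continuous count score.
measured = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
predicted = np.array([0, 1, 1, 1, 2, 2, 1, 1, 0, 2])
counts = np.array([120, 300, 800, 900, 2600, 2500, 1400, 950, 150, 2700])

percent_agreement = float((measured == predicted).mean())

# Sensitivity and specificity for the binary "vigorous" decision.
truth = measured == 2
decision = predicted == 2
sensitivity = float((truth & decision).sum() / truth.sum())
specificity = float((~truth & ~decision).sum() / (~truth).sum())

# ROC area: how well raw counts rank vigorous above non-vigorous minutes.
auc = float(roc_auc_score(truth.astype(int), counts))
```

A high ROC area alongside imperfect minute-level agreement mirrors the paper’s conclusion: the equations classify intensity categories reasonably well even when their minute-by-minute EE predictions are off.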
Abstract:
Both environmental economists and policy makers have shown a great deal of interest in the effect of pollution abatement on environmental efficiency. Despite the modern computational resources available, however, little contribution has been made to the environmental economics field using Markov chain Monte Carlo (MCMC), which simulates from the distribution of a Markov chain run until it approaches equilibrium. These simulation-based methods gained prominence over classical statistical methods through their simultaneous inference on, and incorporation of prior information about, all model parameters. This paper concentrates on this point, applying MCMC to data from China, the largest developing country, which has experienced rapid economic growth and serious environmental pollution in recent years. The variables cover economic output and pollution abatement cost from 1992 to 2003. We test the causal direction between pollution abatement cost and environmental efficiency with MCMC simulation. We find that pollution abatement cost causes an increase in environmental efficiency, which suggests that environmental policy makers should take more substantial measures to reduce pollution in the near future.
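The MCMC machinery the paper relies on can be sketched with a random-walk Metropolis sampler. The target here is a deliberately simple posterior for an “efficiency” parameter with a flat prior on [0, 1]; it is not the paper’s model specification.

```python
import numpy as np

# Invented "observed efficiency" data; the posterior combines a Normal
# likelihood (known sd 0.1) with a flat prior on [0, 1].
rng = np.random.default_rng(6)
data = rng.normal(0.7, 0.1, 50)

def log_post(theta):
    if not 0.0 <= theta <= 1.0:
        return -np.inf              # outside the prior support
    return -0.5 * np.sum((data - theta) ** 2) / 0.1 ** 2

# Random-walk Metropolis: propose a nearby value, accept with probability
# min(1, posterior ratio), otherwise keep the current draw.
theta, chain = 0.5, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

posterior_mean = float(np.mean(chain[1000:]))   # discard burn-in draws
```

The retained draws approximate the full posterior distribution, which is the advantage the abstract highlights: prior information and uncertainty about every parameter are carried through the inference simultaneously.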
Abstract:
Introduction: Built environment interventions designed to reduce non-communicable diseases and health inequity complement urban planning agendas focused on creating more ‘liveable’, compact, pedestrian-friendly, less automobile-dependent and more socially inclusive cities. However, what constitutes a ‘liveable’ community is not well defined. Moreover, there appears to be a gap between the concept and delivery of ‘liveable’ communities. The recently funded NHMRC Centre of Research Excellence (CRE) in Healthy Liveable Communities, established in early 2014, has defined ‘liveability’ from a social determinants of health perspective. Using purpose-designed multilevel longitudinal data sets, it addresses five themes that target key evidence-base gaps for building healthy and liveable communities. The CRE in Healthy Liveable Communities seeks to generate and exchange new knowledge about: 1) measurement of policy-relevant built environment features associated with leading non-communicable disease risk factors (physical activity, obesity), health outcomes (cardiovascular disease, diabetes) and mental health; 2) causal relationships for built environment interventions, using data from longitudinal studies and natural experiments; 3) thresholds for built environment interventions; 4) economic benefits of built environment interventions designed to influence health and wellbeing outcomes; and 5) factors, tools, and interventions that facilitate the translation of research into policy and practice. This evidence is critical to inform future policy and practice in health, land use, and transport planning. Moreover, to ensure policy relevance and facilitate research translation, the CRE in Healthy Liveable Communities builds upon ongoing, and has established new, multi-sector collaborations with national and state policy-makers and practitioners.
The symposium will commence with a brief introduction to embed the research within an Australian health and urban planning context, as well as providing an overall outline of the CRE in Healthy Liveable Communities, its structure and team. Next, an overview of the five research themes will be presented. Following these presentations, the Discussant will consider the implications of the research and opportunities for translation and knowledge exchange. Theme 2 will establish whether, and to what extent, the neighbourhood environment (built and social) is causally related to physical and mental health and associated behaviours and risk factors. In particular, research conducted as part of this theme will: use data from large-scale, longitudinal multilevel studies (HABITAT, RESIDE, AusDiab) to examine relationships that meet causality criteria via statistical methods such as longitudinal mixed-effect and fixed-effect models, multilevel and structural equation models; analyse data on residential preferences to investigate confounding due to neighbourhood self-selection, using measurement and analysis tools such as propensity score matching and ‘within-person’ change modelling to address confounding; analyse data about individual-level factors that might confound, mediate or modify relationships between the neighbourhood environment and health and well-being (e.g., psychosocial factors, knowledge, perceptions, attitudes, functional status); and analyse data on both objective neighbourhood characteristics and residents’ perceptions of these objective features to more accurately assess the relative contribution of objective and perceptual factors to outcomes such as health and well-being, physical activity, active transport, obesity, and sedentary behaviour.
At the completion of Theme 2, we will have demonstrated and applied statistical methods appropriate for determining causality, and generated evidence about causal relationships between the neighbourhood environment, health, and related outcomes. This will provide planners and policy makers with a more robust (valid and reliable) basis on which to design healthy communities.
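One of Theme 2’s confounding strategies can be sketched numerically: when people who prefer walking self-select into walkable neighbourhoods, a naive comparison overstates the environment effect, while conditioning on the self-selection covariate (the idea that propensity-score methods generalise) recovers it. All data below are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
prefers = rng.binomial(1, 0.5, n)                   # prefers a walkable area
walkable = rng.binomial(1, 0.3 + 0.4 * prefers)     # self-selection into areas
# Outcome: activity (min/week) driven by preference AND a true +5 effect
# of actually living in a walkable neighbourhood.
activity = 30 + 10 * prefers + 5 * walkable + rng.normal(0, 5, n)

# Naive contrast: confounded upward by self-selection.
naive = activity[walkable == 1].mean() - activity[walkable == 0].mean()

# Stratify on the self-selection covariate, then average the within-stratum
# treated-vs-control contrasts; this removes the confounding.
effects = []
for g in (0, 1):
    grp = prefers == g
    effects.append(activity[grp & (walkable == 1)].mean()
                   - activity[grp & (walkable == 0)].mean())
adjusted = float(np.mean(effects))
```

With one binary confounder, stratification and propensity-score matching coincide; with many covariates, the propensity score compresses them into a single matching variable, which is why Theme 2 lists it among its analysis tools.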
Abstract:
Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
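The abstract’s setting can be checked numerically for a small example network: under small i.i.d. weight perturbations, a first-order (linearised) estimate of the output-error variance agrees closely with Monte Carlo simulation. The architecture and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
W1 = rng.normal(0, 1, (4, 8))
W2 = rng.normal(0, 1, (8, 1))
x = rng.normal(0, 1, 4)          # fixed input
sigma = 1e-3                     # small, statistically independent weight errors

def forward(W1p, W2p):
    return float(np.tanh(x @ W1p) @ W2p)

y0 = forward(W1, W2)

# First-order estimate: var(dy) ~= sigma^2 * sum of squared partial
# derivatives of the output with respect to each weight.
h = np.tanh(x @ W1)
dh = 1 - h ** 2
grad_W2 = h                                # dy/dW2
grad_W1 = np.outer(x, dh * W2[:, 0])       # dy/dW1 via the chain rule
var_est = sigma ** 2 * (np.sum(grad_W1 ** 2) + np.sum(grad_W2 ** 2))

# Monte Carlo: many independently perturbed copies of the network.
errs = np.array([
    forward(W1 + rng.normal(0, sigma, W1.shape),
            W2 + rng.normal(0, sigma, W2.shape)) - y0
    for _ in range(4000)
])
var_mc = errs.var()
```

The close agreement between `var_mc` and `var_est` reflects the abstract’s small-error assumption: for larger perturbations, higher-order terms would make the linearised bound loose.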