64 results for NCHS data brief (Series)
Abstract:
This report provides an introduction to our analyses of secondary data with respect to violent acts and incidents relating to males living in rural settings in Australia. It clarifies important aspects of our overall approach, primarily by concentrating on three elements that required early scoping and resolution. Firstly, a wide and inclusive view of violence, encompassing measures of violent acts and incidents as well as data identifying risk-taking behaviour and the consequences of violence, is outlined and justified. Secondly, the classification used to make comparisons between the city and the bush, together with associated caveats, is outlined. Thirdly, issues relating to national injury data are discussed. Additional commentary resulting from exploration, examination and analyses of secondary data is published online in five subsequent reports in this series.
Abstract:
This report focuses on our examination of extant data which have been sourced with respect to self-harm and suicide in Australia. In particular, we discuss specific areas of concern regarding elevated rates of suicide among rural males, as well as data anomalies which emerged during our examination of these data. Additional commentary resulting from exploration, examination and analyses of secondary data is published online in complementary reports in this series.
Abstract:
This report focuses on our examination of extant data which have been sourced with respect to intentional violence perpetrated or experienced by males in regional and remote Australia. Intentional violent acts can be physical, sexual or psychological, or involve deprivation or neglect. We have presented these data under the headings of: self-harm including suicide; homicide; assault, sexual assault and the threat of assault; child abuse; other family and intimate partner violence; harassment, stalking and bullying; alcohol-related social violence; and animal abuse. State variations in interpersonal violence are also presented. Additional commentary resulting from exploration, examination and analyses of secondary data is published online in complementary reports in this series.
Abstract:
This report focuses on our examination of extant data which have been sourced with respect to unintentional serious and violent injuries to males living in regional and remote Australia. Such injuries might typically be caused by, for example, transport accidents, occupational exposures and hazards, and burns. Unintentional violent incidents thus cause physical trauma, the consequences of which can sometimes lead to chronic conditions including psychological harm or substance abuse. Additional commentary resulting from exploration, examination and analyses of secondary data is published online in complementary reports in this series.
Abstract:
This report focuses on our examination of extant data which have been sourced with respect to personally and socially risky behaviour associated with males living in regional and remote Australia. The AIHW (2008: PHE 97:89) defines personally risky behaviour, on the one hand, as working, swimming, boating, driving or operating hazardous machinery while intoxicated with alcohol or an illicit drug. Socially risky behaviour, on the other hand, is defined as creating a public disturbance, damaging property, stealing or verbally or physically abusing someone while intoxicated with alcohol or an illicit drug. Additional commentary resulting from exploration, examination and analyses of secondary data is published online in complementary reports in this series.
Abstract:
This report considers extant data which have been sourced with respect to some of the consequences of violent acts and incidents and risky behaviour for males living in regional and remote Australia. These data have been collated and presented under the headings: juvenile offenders; long-term health consequences; anxiety and depression; and other chronic disabilities. Additional commentary resulting from exploration, examination and analyses of secondary data is published online in complementary reports in this series.
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing the above-mentioned challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are respectively presented as training target and training input to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input. Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model and four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model, but which neglected suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecasted. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds the promise of increased asset availability, maintenance cost effectiveness, operational safety and, ultimately, organisational competitiveness.
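A note on the survival targets mentioned above: they build on a Kaplan-Meier (product-limit) estimate taken over both failed and suspended (censored) histories. The thesis's adapted estimator is not detailed in the abstract; the Python sketch below shows only the standard Kaplan-Meier estimator on illustrative data, to make concrete how suspended units remain in the risk set without being counted as failures.

import numpy as np

def kaplan_meier(durations, failed):
    """Standard Kaplan-Meier survival curve.

    durations : time to failure or to suspension for each unit
    failed    : 1 if the unit failed, 0 if the history was suspended (censored)
    """
    durations = np.asarray(durations, dtype=float)
    failed = np.asarray(failed, dtype=int)
    times = np.sort(np.unique(durations[failed == 1]))
    survival, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)                       # units still operating just before t
        failures = np.sum((durations == t) & (failed == 1))
        s *= 1.0 - failures / at_risk                          # product-limit update
        survival.append(s)
    return times, np.array(survival)

# Illustrative histories: four failures and two suspended units
times, surv = kaplan_meier([5, 6, 6, 8, 10, 12], [1, 1, 0, 1, 0, 1])
print(dict(zip(times, surv.round(3))))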
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found to be present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which have been established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
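For readers unfamiliar with detrended fluctuation analysis, the core computation is the scaling of a q-th-order fluctuation function F_q(s) with window size s; the slope of log F_q(s) against log s plays the role of a Hurst-type exponent, and q = 2 recovers standard DFA. The Python sketch below is a simplified, assumption-level illustration of that computation on synthetic white noise, not the thesis's MF-DFA implementation.

import numpy as np

def mfdfa_fluctuation(x, scales, q=2, order=1):
    """q-th-order detrended fluctuation F_q(s) for a set of window sizes.

    The slope of log F_q(s) versus log s is a Hurst-like exponent: values
    above 0.5 suggest long memory, values below 0.5 anti-persistence.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    F = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        rms = np.array(rms)
        F.append(np.mean(rms ** q) ** (1.0 / q))
    return np.array(F)

# Example: scaling exponent of white noise (expect roughly 0.5)
rng = np.random.default_rng(0)
scales = np.array([16, 32, 64, 128, 256])
F = mfdfa_fluctuation(rng.standard_normal(4096), scales, q=2)
hurst = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(hurst, 2))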
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung-Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung-Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar/British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
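The Ljung-Box statistic referred to above is the usual portmanteau test Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n-k), compared against a chi-squared distribution with h degrees of freedom; applied to squared residuals it gives a crude check for ARCH effects. The Python sketch below computes it from scratch on synthetic residuals. The paper's generalized residuals and Rosenblatt-transformed residuals are model-specific constructions not reproduced here.

import numpy as np
from scipy.stats import chi2

def ljung_box(residuals, lags=10):
    """Ljung-Box Q statistic and p-value for residual autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = len(e)
    acf = np.array([np.sum(e[k:] * e[:-k]) / np.sum(e ** 2) for k in range(1, lags + 1)])
    Q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
    return Q, chi2.sf(Q, df=lags)

# White-noise residuals: neither test should reject
rng = np.random.default_rng(1)
resid = rng.standard_normal(500)
print(ljung_box(resid, lags=10))        # serial correlation check
print(ljung_box(resid ** 2, lags=10))   # crude ARCH check on squared residuals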
Abstract:
Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics is considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of the root mean square error values) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
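As a hedged illustration of what an object-centered decomposition involves, the sketch below builds a real spherical-harmonic basis with scipy and fits synthetic "height" samples by least squares. The sampling region, model order and synthetic surface are assumptions; the study's actual corneal fitting and its RMS comparison against Zernike expansions are not reproduced.

import numpy as np
from scipy.special import sph_harm

def real_sph_basis(max_degree, theta, phi):
    """Real spherical-harmonic design matrix at (theta, phi) samples.

    theta : azimuthal angle in [0, 2*pi); phi : polar angle in [0, pi].
    """
    cols = []
    for n in range(max_degree + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(abs(m), n, theta, phi)      # complex Y_n^{|m|}
            if m > 0:
                cols.append(np.sqrt(2) * Y.real)
            elif m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            else:
                cols.append(Y.real)
    return np.column_stack(cols)

# Synthetic height data over a spherical cap (a crude stand-in for a cornea)
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 2000)
phi = rng.uniform(0, 0.4 * np.pi, 2000)
height = np.cos(phi) + 0.05 * np.sin(theta) * np.sin(phi) + 0.001 * rng.standard_normal(2000)

B = real_sph_basis(4, theta, phi)                    # 25 basis functions up to degree 4
coeffs, *_ = np.linalg.lstsq(B, height, rcond=None)
rmse = np.sqrt(np.mean((B @ coeffs - height) ** 2))
print(len(coeffs), round(rmse, 5))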
Abstract:
Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called season. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education and have previously collaborated on research into the methodological development and application of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models," third edition, Chapman & Hall/CRC, 2008. In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
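The book's examples use the R package season; as a language-neutral illustration of the simplest technique for estimating the size and timing of a seasonal peak, the Python sketch below fits a single-harmonic cosinor model by least squares and recovers the amplitude and peak month of a synthetic monthly series. It is an assumption-level sketch, not code from the book.

import numpy as np

def fit_cosinor(y, month):
    """Ordinary least-squares cosinor fit for monthly health counts or rates.

    Fits y = b0 + b1*cos(t) + b2*sin(t) with t = 2*pi*(month-1)/12; the
    amplitude and peak month follow from the sine/cosine coefficients.
    """
    t = 2 * np.pi * (np.asarray(month, dtype=float) - 1) / 12.0
    X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    amplitude = np.hypot(beta[1], beta[2])
    peak_month = (np.arctan2(beta[2], beta[1]) * 12 / (2 * np.pi)) % 12 + 1
    return beta, amplitude, peak_month

# Synthetic monthly admissions peaking around month 7
months = np.tile(np.arange(1, 13), 5)
rng = np.random.default_rng(3)
y = 100 + 20 * np.cos(2 * np.pi * (months - 7) / 12) + rng.normal(0, 3, months.size)
print(fit_cosinor(y, months)[1:])   # amplitude near 20, peak near month 7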
Abstract:
Background It remains unclear whether it is possible to develop an epidemic forecasting model for the transmission of dengue fever in Queensland, Australia. Objectives To examine the potential impact of the El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia, and to explore the possibility of developing a forecast model for dengue fever. Methods Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. Numbers of dengue fever cases notified and numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and relevant population data were obtained from the Australian Bureau of Statistics. A multivariate Seasonal Auto-regressive Integrated Moving Average (SARIMA) model was developed and validated by dividing the data file into two datasets: the data from January 1993 to December 2003 were used to construct the model and those from January 2004 to December 2005 were used to validate it. Results A decrease in the average SOI (ie, warmer conditions) during the preceding 3-12 months was significantly associated with an increase in the monthly numbers of postcode areas with dengue fever cases (β = 0.038; p = 0.019). Predicted values from the SARIMA model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
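A hedged sketch of the modelling step: fitting a seasonal ARIMA model with a lagged SOI series as an exogenous regressor, then validating on a hold-out period, as in the 1993-2003 / 2004-2005 split described above. The data below are placeholders and the (1,0,1)x(1,0,1,12) orders are assumptions for illustration, not the orders fitted in the study.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder series: real inputs would be monthly counts of postcode areas with
# notified dengue cases (Queensland Health) and the SOI averaged over the
# preceding 3-12 months (Bureau of Meteorology); both are simulated here.
rng = np.random.default_rng(4)
idx = pd.date_range("1993-01-01", "2005-12-01", freq="MS")
soi_lag = pd.Series(rng.normal(0.0, 8.0, len(idx)), index=idx, name="soi_lag")
areas = pd.Series(rng.poisson(5, len(idx)).astype(float), index=idx, name="areas")

train = slice("1993-01-01", "2003-12-01")   # model-building period
test = slice("2004-01-01", "2005-12-01")    # validation period

# Seasonal ARIMA with the lagged SOI as an exogenous regressor
# (orders are illustrative assumptions only).
fit = SARIMAX(areas.loc[train], exog=soi_lag.loc[train],
              order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)

pred = fit.get_forecast(steps=24, exog=soi_lag.loc[test]).predicted_mean
rmse = float(np.sqrt(np.mean((pred.values - areas.loc[test].values) ** 2)))
print(fit.params)      # includes the coefficient on the lagged SOI
print(round(rmse, 2))  # hold-out accuracy on the placeholder data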
Abstract:
Background Up to one-third of people affected by cancer experience ongoing psychological distress and would benefit from screening followed by an appropriate level of psychological intervention. This rarely occurs in routine clinical practice due to barriers such as lack of time and experience. This study investigated the feasibility of community-based telephone helpline operators screening callers affected by cancer for their level of distress using a brief screening tool (Distress Thermometer), and triaging to the appropriate level of care using a tiered model. Methods Consecutive cancer patients and carers who contacted the helpline from September-December 2006 (n = 341) were invited to participate. Routine screening and triage was conducted by helpline operators at this time. Additional socio-demographic and psychosocial adjustment data were collected by telephone interview by research staff following the initial call. Results The Distress Thermometer had good overall accuracy in detecting general psychosocial morbidity (Hospital Anxiety and Depression Scale cut-off score ≥15) for cancer patients (AUC = 0.73) and carers (AUC = 0.70). We found 73% of participants met the Distress Thermometer cut-off for distress caseness according to the Hospital Anxiety and Depression Scale (a score ≥4), and optimal sensitivity (83%, 77%) and specificity (51%, 48%) were obtained with cut-offs of 4 and 6 in the patient and carer groups respectively. Distress was significantly associated with the Hospital Anxiety and Depression Scale scores (total, as well as anxiety and depression subscales) and level of care in cancer patients, as well as with the Hospital Anxiety and Depression Scale anxiety subscale for carers. There was a trend for more highly distressed callers to be triaged to more intensive care, with patients with distress scores ≥4 more likely to receive extended or specialist care. Conclusions Our data suggest that it was feasible for community-based cancer helpline operators to screen callers for distress using a brief screening tool, the Distress Thermometer, and to triage callers to an appropriate level of care using a tiered model. The Distress Thermometer is a rapid and non-invasive alternative to longer psychometric instruments, and may provide part of the solution in ensuring distressed patients and carers affected by cancer are identified and supported appropriately.
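The screening-accuracy figures above (sensitivity, specificity, AUC) follow from comparing Distress Thermometer cut-offs against HADS-defined caseness. The Python sketch below computes those quantities on synthetic scores, using scikit-learn for the AUC; it is illustrative only and uses no study data.

import numpy as np
from sklearn.metrics import roc_auc_score

def screen_accuracy(dt_score, hads_case, cutoff):
    """Sensitivity and specificity of a Distress Thermometer cut-off against
    HADS-defined caseness, plus the overall AUC (synthetic illustration)."""
    dt_score = np.asarray(dt_score, dtype=float)
    hads_case = np.asarray(hads_case).astype(bool)
    flagged = dt_score >= cutoff
    sensitivity = np.mean(flagged[hads_case])     # cases correctly flagged
    specificity = np.mean(~flagged[~hads_case])   # non-cases correctly passed
    return sensitivity, specificity, roc_auc_score(hads_case, dt_score)

# Synthetic example: HADS cases tend to score higher on the 0-10 thermometer
rng = np.random.default_rng(5)
case = rng.random(341) < 0.5
dt = np.clip(np.where(case, rng.normal(6, 2, 341), rng.normal(3, 2, 341)).round(), 0, 10)
print(screen_accuracy(dt, case, cutoff=4))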
Abstract:
This paper proposes an innovative instance-similarity-based evaluation metric that reduces the search map for the clustering to be performed. An aggregate global score is calculated for each instance using the novel idea of the Fibonacci series. The use of Fibonacci numbers is able to separate the instances effectively and, hence, the intra-cluster similarity is increased and the inter-cluster similarity is decreased during clustering. The proposed FIBCLUS algorithm is able to handle datasets with numerical, categorical and a mix of both types of attributes. Results obtained with FIBCLUS are compared with the results of existing algorithms such as k-means, x-means, expectation maximization and hierarchical algorithms that are widely used to cluster numeric, categorical and mixed data types. Empirical analysis shows that FIBCLUS is able to produce better clustering solutions in terms of entropy, purity and F-score in comparison with these existing algorithms.
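The external evaluation measures mentioned above (purity and entropy) are computed from the contingency of predicted clusters against known classes; F-score is derived similarly. The Python sketch below shows purity and weighted cluster entropy on a toy labelling. FIBCLUS's Fibonacci-based instance score itself is not reproduced, since its details are not given in the abstract.

import numpy as np

def purity_and_entropy(labels_true, labels_pred):
    """External clustering quality: purity (higher is better) and
    weighted cluster entropy (lower is better)."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = len(labels_true)
    purity_hits, entropy = 0, 0.0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        counts = np.unique(members, return_counts=True)[1]
        p = counts / counts.sum()
        purity_hits += counts.max()                       # majority class in this cluster
        entropy += (len(members) / n) * (-np.sum(p * np.log2(p)))
    return purity_hits / n, entropy

# Toy example: three true classes recovered imperfectly by two clusters
true = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred = [0, 0, 1, 1, 1, 1, 1, 0, 1]
print(purity_and_entropy(true, pred))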
Abstract:
Several authors stress the importance of data as a crucial foundation for operational, tactical and strategic decisions (e.g., Redman 1998, Tee et al. 2007). Data provides the basis for decision making, as data collection and processing are typically associated with reducing uncertainty in order to make more effective decisions (Daft and Lengel 1986). While the first series of investments in Information Systems/Information Technology (IS/IT) in organizations improved data collection, restricted computational capacity and limited processing power created challenges (Simon 1960). Fifty years on, capacity and processing problems are increasingly less relevant; in fact, the opposite problem now exists. Determining data relevance and usefulness is complicated by increased data capture and storage capacity, as well as by continual improvements in information processing capability. As the IT landscape changes, businesses are inundated with ever-increasing volumes of data from both internal and external sources, available on both an ad-hoc and a real-time basis. More data, however, does not necessarily translate into more effective and efficient organizations, nor does it increase the likelihood of better or timelier decisions. This raises questions about what data managers require to assist their decision-making processes.