265 results for Methods: Data Analysis
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection in longitudinal data analysis. Appropriate covariate selection is a prerequisite for correct variance modeling, and correct specification of both the covariates and the variance function is in turn vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root in approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC carries little information about correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
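The paper's EQIC is not implemented in standard software, but the shape of such a stepwise procedure can be sketched with the QIC criterion in statsmodels, which shares the same Kullback-Leibler root. A minimal sketch on synthetic data, with hypothetical variable names:

```python
# Sketch of stepwise covariate selection for longitudinal data using
# QIC, a KL-based criterion analogous in spirit to the paper's EQIC
# (which is not available in statsmodels). All variable names and the
# synthetic data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.generalized_estimating_equations import GEE
from statsmodels.genmod.cov_struct import Independence

rng = np.random.default_rng(0)
n_subj, n_obs = 50, 4
subject = np.repeat(np.arange(n_subj), n_obs)
X = rng.normal(size=(n_subj * n_obs, 3))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n_subj * n_obs)
df = pd.DataFrame({"y": y, "x1": X[:, 0], "x2": X[:, 1],
                   "x3": X[:, 2], "subject": subject})

def fit_qic(formula):
    res = GEE.from_formula(formula, groups="subject", data=df,
                           family=sm.families.Gaussian(),
                           cov_struct=Independence()).fit()
    qic, qicu = res.qic()          # statsmodels returns (QIC, QICu)
    return qic

candidates, selected, current = ["x1", "x2", "x3"], [], "y ~ 1"
best = fit_qic(current)
improved = True
while improved and candidates:
    improved = False
    scores = {c: fit_qic(current + " + " + c) for c in candidates}
    winner = min(scores, key=scores.get)
    if scores[winner] < best:      # smaller QIC is better
        best, improved = scores[winner], True
        current += " + " + winner
        selected.append(winner)
        candidates.remove(winner)
print("selected covariates:", selected)
```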
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
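The empirical-likelihood combination itself is not available off the shelf, but the ingredient GEE fits under different working correlation models are straightforward; a minimal statsmodels sketch on synthetic data (the weighting step is omitted):

```python
# Fit the same marginal model under several working correlation
# structures -- the ingredient GEE estimates a hybrid method would
# combine. Synthetic data; the empirical-likelihood weighting step
# (Qin and Lawless, 1994) is NOT implemented here.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.generalized_estimating_equations import GEE
from statsmodels.genmod.cov_struct import Independence, Exchangeable

rng = np.random.default_rng(1)
n_subj, n_obs = 100, 4
subject = np.repeat(np.arange(n_subj), n_obs)
b = rng.normal(scale=0.8, size=n_subj)              # shared subject effect
x = rng.normal(size=n_subj * n_obs)
y = 0.5 + 1.5 * x + b[subject] + rng.normal(size=n_subj * n_obs)
df = pd.DataFrame({"y": y, "x": x, "subject": subject})

for name, cs in [("independence", Independence()),
                 ("exchangeable", Exchangeable())]:
    res = GEE.from_formula("y ~ x", groups="subject", data=df,
                           family=sm.families.Gaussian(),
                           cov_struct=cs).fit()
    print(f"{name:13s} beta_x = {res.params['x']:.3f}"
          f" (SE {res.bse['x']:.3f})")
```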
Abstract:
For research students and early career academics, universities offer resources that help develop their research skills: critical theory development, literature reviews, research methods, data analysis, writing skills, and so on. However, building a successful career in academia poses a wide range of challenges beyond the pure academic craft of research. The resources and resilience needed to shape an academic career path often appear ambiguous, particularly in the innately transformative field of Internet research. In response to feedback from AoIR members, this full-day workshop seeks to address this concern by offering a supportive and constructive environment in which to discuss career development issues of specific interest to research students and early career academics.
Abstract:
Background: In health-related research, it is critical not only to demonstrate the efficacy of an intervention, but also to show that the effect is not due to chance or confounding variables. Content: Single case experimental design is a useful quasi-experimental design and method for achieving these goals when participants and research funds are limited. This type of design has several advantages over group experimental designs, one being the capacity to focus on individual rather than group performance outcomes. Conclusions: This comprehensive review demonstrates the benefits and limitations of using single case experimental design, its various design methods, and its approaches to data collection and analysis for research purposes.
Abstract:
Social moderation involves teachers gathering together to discuss their judgements of the quality of student work and to reach agreement on the standard awarded. This qualitative study, conducted over a three-year period, investigated the social practice of moderation and its influence on teachers’ judgements of students’ work. The data collection methods comprised an initial survey of teachers’ understandings of moderation and standards, pre-interviews of teachers who participated in the moderation meetings, and observations of these meetings, each with a particular focus on one teacher (the focus teachers). Data analysis involved organising, matching, coding, and identifying patterns and themes using a constant comparative method. Socio-cultural theories of learning and assessment underpinned the approach to data analysis and proved helpful in explaining the diverse influences on teachers’ judgements beyond the task criteria, and the progressive development of shared understandings through professional discussions of students’ work. The study revealed that the process is neither straightforward nor linear, and that it is influenced by factors such as the representation of the standards and the knowledge base of the teachers.
Abstract:
This chapter contains sections titled: Introduction; Case study: estimating transmission rates of nosocomial pathogens; Models and methods; Data analysis and results; Discussion; References.
Abstract:
Aims: To describe a local data linkage project matching hospital data with the Australian Institute of Health and Welfare (AIHW) National Death Index (NDI) to assess long-term outcomes of intensive care unit patients. Methods: Data were obtained from hospital intensive care and cardiac surgery databases for all patients aged 18 years and over admitted to either of two intensive care units at a tertiary-referral hospital between 1 January 1994 and 31 December 2005. Date of death was obtained from the AIHW NDI by probabilistic software matching, supplemented by manual checking through hospital databases and other sources. Survival was calculated from the time of ICU admission, with a censoring date of 14 February 2007. For patients with multiple hospital admissions requiring intensive care, only the first admission was analysed. Summary and descriptive statistics were used for preliminary data analysis, and Kaplan-Meier survival analysis was used to analyse factors determining long-term survival. Results: During the study period, 21 415 unique patients had 22 552 hospital admissions that included an ICU admission; 19 058 surgical procedures were performed, with a total of 20 092 ICU admissions. There were 4936 deaths. Median follow-up was 6.2 years, totalling 134 203 patient-years. The casemix was predominantly cardiac surgery (80%), followed by cardiac medical (6%) and other medical (4%). Unadjusted survival at 1, 5 and 10 years was 97%, 84% and 70%, respectively. One-year survival ranged from 97% for cardiac surgery to 36% for cardiac arrest. An APACHE II score was available for 16 877 patients. Among those discharged alive from hospital, 1-, 5- and 10-year survival varied with discharge location. Conclusions: ICU-based data linkage projects are a feasible means of determining long-term outcomes of ICU patients.
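The Kaplan-Meier computation described here is a few lines with the lifelines library; a minimal sketch on synthetic follow-up data standing in for the linked ICU cohort:

```python
# Kaplan-Meier survival from linked follow-up data -- a minimal sketch.
# Synthetic durations and event flags stand in for the linked ICU cohort.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 1000
durations = rng.exponential(scale=12.0, size=n)   # years from ICU admission
censor_time = 13.1                                # administrative censoring
observed = durations <= censor_time               # death seen before censoring
durations = np.minimum(durations, censor_time)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.survival_function_at_times([1, 5, 10]))  # survival at 1, 5, 10 years
```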
Abstract:
Structural health monitoring (SHM) refers to procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage, followed by appropriate retrofitting, helps prevent structural failure, saves money otherwise spent on maintenance or replacement, and ensures the structure operates safely and efficiently over its intended life. Although visual inspection and other techniques, such as vibration-based methods, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasingly used. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves with sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (energy from the damage source itself is utilised, with no need for an external energy supply) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are among its attractive features. Despite these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis is linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment.

In the AE technique, the location of the emission source is usually calculated from the arrival times and velocities of the AE signals recorded by a number of sensors. Complications arise, however, because AE waves can travel through a structure in a number of different modes with different velocities and frequencies. To locate a source accurately, it is therefore necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. The study also explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localisation.

A major problem in the practical use of the AE technique is the presence of AE sources other than cracks, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask signals from crack activity, so discriminating signals to identify their sources is very important. This work developed a model that uses signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing the amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources.

Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Although different damage quantification methods have been proposed for the AE technique, not all have achieved universal approval or been shown to be suitable for all situations. The b-value analysis, which studies the distribution of the amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel. These were found to give encouraging results for laboratory data, extending the possibility of their use on real-life structures. By addressing these primary issues, this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
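In its basic form, the b-value analysis mentioned above reduces to fitting the slope of a log-linear amplitude-frequency relation, log10 N(>=A) = a - b*(A_dB/20); a minimal sketch on simulated AE amplitudes (the improved b-value's windowing and weighting are not reproduced):

```python
# Basic b-value analysis of AE amplitudes: fit the slope of
# log10 N(>=A) versus amplitude (dB/20), Gutenberg-Richter style.
# Simulated amplitudes stand in for recorded AE hits; the thesis's
# "improved b-value" refinements are not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
amp_db = rng.exponential(scale=10.0, size=2000) + 40.0  # synthetic AE hits, dB

levels = np.arange(40, 90, 2.0)                         # amplitude thresholds
counts = np.array([(amp_db >= a).sum() for a in levels])
mask = counts > 0
# least-squares fit: log10 N = a - b * (A_dB / 20)
slope, intercept = np.polyfit(levels[mask] / 20.0, np.log10(counts[mask]), 1)
b_value = -slope
print(f"estimated b-value: {b_value:.2f}")
```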
Abstract:
Objective: To describe unintentional injuries to children aged less than one year, using coded and textual information, in three-month age bands to reflect their development over the year. Methods: Data from the Queensland Injury Surveillance Unit were used. The Unit collects demographic, clinical and circumstantial details about injured persons presenting to selected emergency departments across the State. Only injuries coded as unintentional, in children admitted to hospital, were included in this analysis. Results: After editing, 1,082 children remained for analysis, 24 with transport-related injuries. Falls were the most common injury, but became proportionately less common over the year, whereas burns and scalds and foreign-body injuries increased. The proportion of injuries due to contact with persons or objects varied little, but poisonings were relatively more common in the first and fourth three-month periods. Descriptions indicated that family members were causally involved in some way in 16% of injuries. Our findings are in qualitative agreement with comparable previous studies. Conclusion: The pattern of injuries varies over the first year of life and is clearly linked to the child's increasing mobility. Implications: Injury patterns in the first year of life should be reported over shorter intervals. Preventive measures for young children need to be designed with their rapidly changing developmental stage in mind, using a variety of strategies, one of which could be opportunistic, developmentally specific education of parents.

Injuries in young children are of abiding concern given their immediate health and emotional effects and their potential for long-term adverse sequelae. In Australia, in the 2006/07 financial year, 2,869 children less than 12 months of age were admitted to hospital for an unintentional injury, a rate of 10.6 per 1,000, representing a considerable economic and social burden. Given that many of these injuries are preventable, this is particularly concerning. Most epidemiologic studies analyse data in five-year age bands, so children less than five years of age are examined as a group. This study includes only children younger than one year of age, to identify injury detail lost in analyses of the larger group, as we hypothesised that the injury pattern varies with the developmental stage of the child. The authors of several North American studies have commented that, in dealing with injuries in pre-school children, broad age groupings cannot do justice to the rapid developmental changes of infancy and early childhood, and have consequently analysed injuries in shorter intervals. To our knowledge, no similar analysis of Australian infant injuries has been published to date. This paper describes injury in children less than 12 months of age using data from the Queensland Injury Surveillance Unit (QISU).
Abstract:
BACKGROUND & AIMS: Metabolomics is the comprehensive analysis of low-molecular-weight endogenous metabolites in a biological sample. It can map perturbations in the early biochemical changes of disease and hence provides an opportunity to develop predictive biomarkers that yield valuable insights into disease mechanisms. The aim of this study was to elucidate the changes in endogenous metabolites and to phenotype the metabolic profile of d-galactosamine (GalN)-induced acute hepatitis in rats by UPLC-ESI MS. METHODS: The systemic biochemical actions of GalN administration (ip, 400 mg/kg) were investigated in male Wistar rats using conventional clinical chemistry, liver histopathology and metabolomic analysis of urine by UPLC-ESI MS. Urine was collected pre-dose (-24 to 0 h) and at 0-24, 24-48, 48-72 and 72-96 h post-dose. The mass spectrometry data were analysed both visually and in conjunction with multivariate data analysis. RESULTS: There was a time-dependent biochemical effect of the GalN dose on the levels of a range of low-molecular-weight metabolites in urine, correlated with the developing phase of GalN-induced acute hepatitis. Urinary excretion of beta-hydroxybutanoic acid and citric acid decreased following GalN dosing, whereas excretion of glycocholic acid, indole-3-acetic acid, sphinganine, N-acetyl-L-phenylalanine, cholic acid and creatinine increased, suggesting that several key metabolic pathways, including energy metabolism, lipid metabolism and amino acid metabolism, were perturbed by GalN. CONCLUSION: This metabolomic investigation demonstrates that this robust, non-invasive tool offers insight into the metabolic states of disease.
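The multivariate step is typically a PCA of the sample-by-metabolite intensity matrix; a minimal scikit-learn sketch, with a simulated matrix standing in for the rat urine data:

```python
# PCA of a samples x metabolite-features intensity matrix -- the usual
# first multivariate look at UPLC-ESI MS urine profiles. Simulated
# intensities stand in for the pre-dose / post-dose rat urine samples.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_pre, n_post, n_feat = 10, 10, 200
pre = rng.lognormal(mean=1.0, sigma=0.3, size=(n_pre, n_feat))
post = pre * rng.lognormal(mean=0.0, sigma=0.3, size=(n_post, n_feat))
post[:, :20] *= 2.0               # pretend 20 metabolites respond to dosing

X = np.vstack([pre, post])
X = StandardScaler().fit_transform(np.log(X))  # log-transform, unit variance
scores = PCA(n_components=2).fit_transform(X)
print("PC1, pre-dose: ", scores[:n_pre, 0].round(1))
print("PC1, post-dose:", scores[n_pre:, 0].round(1))
```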
Abstract:
This paper uses innovative content analysis techniques to map how the death of Oscar Pistorius' girlfriend, Reeva Steenkamp, was framed in Twitter conversations. Around 1.5 million posts from a two-week timeframe were analyzed with a combination of syntactic and semantic methods. The analysis is grounded in the frame analysis perspective and differs from sentiment analysis: instead of looking for explicit evaluations, such as “he is guilty” or “he is innocent”, the results show how opinions can be identified through complex articulations of more implicit symbolic devices, such as repeatedly mentioned examples and metaphors. Different frames were adopted by users as more information about the case was revealed: from a more episodic frame, dominant at the very beginning, to more systemic framings highlighting the association of the event with urban violence, gun control issues, and violence against women. A detailed timeline of the discussions is provided.
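The paper's syntactic/semantic pipeline is not public; as a rough illustration only, the timeline idea of tracking how often each frame's indicator terms appear per day can be sketched as follows (keyword lists and posts are entirely hypothetical):

```python
# Crude timeline of frame adoption: count frame-indicator keywords per
# day. Hypothetical keyword lists and toy posts -- a proxy for, not a
# reproduction of, the paper's syntactic/semantic frame analysis.
from collections import Counter
from datetime import date

frames = {
    "episodic": {"shooting", "arrest", "bail"},
    "gun_control": {"gun", "firearm", "license"},
    "violence_against_women": {"femicide", "abuse", "domestic"},
}
posts = [  # (date, text) -- toy stand-ins for the 1.5M tweets
    (date(2013, 2, 14), "Breaking: shooting at Pistorius home, arrest made"),
    (date(2013, 2, 20), "This is about gun ownership and firearm permits"),
    (date(2013, 2, 25), "Another femicide. Domestic abuse is the real story"),
]

timeline = Counter()
for day, text in posts:
    tokens = set(text.lower().replace(",", " ").replace(".", " ").split())
    for frame, keywords in frames.items():
        timeline[(day, frame)] += len(tokens & keywords)

for (day, frame), n in sorted(timeline.items()):
    if n:
        print(day, frame, n)
```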
Abstract:
This thesis proposes three novel models that extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and hitherto untested statistic for statistical model choice is found to be a practical alternative for larger datasets, and the existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case study is the first of its kind and contains interesting results obtained using so-called unit information prior distributions.
Abstract:
A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated using principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, these qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Three further methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN) and least squares support vector machines (LS-SVM), were applied for classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct on the prediction set). Additionally, multiple linear regression (MLR) was used to explore relationships between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
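The data fusion step is, at its simplest, column-wise concatenation of the scaled HPLC-DAD and ICP-MS blocks before classification; a minimal scikit-learn sketch with simulated matrices in place of the CM measurements:

```python
# Low-level data fusion: concatenate scaled HPLC-DAD and ICP-MS blocks,
# then classify provinces with KNN. Simulated blocks stand in for the
# Cortex moutan measurements; the PCA/LDA/BP-ANN/LS-SVM comparisons
# from the paper are omitted.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_per_class, n_peaks, n_elements = 20, 15, 10
labels = np.repeat([0, 1, 2], n_per_class)                 # three provinces
hplc = rng.normal(loc=labels[:, None], scale=1.0, size=(60, n_peaks))
icpms = rng.normal(loc=labels[:, None] * 0.5, scale=1.0, size=(60, n_elements))

fused = np.hstack([StandardScaler().fit_transform(hplc),
                   StandardScaler().fit_transform(icpms)])  # combined matrix
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("prediction-set accuracy:", knn.score(X_te, y_te))
```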
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation, and more stable in its numerical results, than existing methods. We apply the methodology to a dataset from a randomized clinical trial.
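The core trick of induced smoothing, replacing the indicator in the rank estimating function with a normal CDF so that it becomes differentiable, can be sketched directly; a toy version for independent data with a fixed bandwidth (the paper's clustered, covariance-driven smoothing is not reproduced):

```python
# Toy smoothed rank (Gehan-type) regression: replace the indicator
# I(e_i <= e_j) by Phi((e_j - e_i)/h) and solve U(beta) = 0.
# Independent data and a fixed bandwidth h -- the paper's clustered,
# covariance-driven induced smoothing is not reproduced here.
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

rng = np.random.default_rng(6)
n, p = 200, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=n)  # heavy tails

def U(beta, h=0.5):
    e = y - X @ beta
    P = norm.cdf((e[None, :] - e[:, None]) / h)  # P[i, j] ~ I(e_i <= e_j)
    # U_k = sum_ij (X[i, k] - X[j, k]) * P[i, j]
    return np.einsum("ij,ik->k", P, X) - np.einsum("ij,jk->k", P, X)

beta_hat = fsolve(U, x0=np.zeros(p))
print("smoothed rank estimate:", beta_hat.round(3))
```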
Abstract:
Qualitative research methods require transparency to ensure the ‘trustworthiness’ of the data analysis. The intricate processes of organizing, coding and analyzing the data are often rendered invisible in the presentation of the research findings, demanding a ‘leap of faith’ from the reader. Computer-assisted data analysis software can be used to make the research process more transparent without sacrificing rich, interpretive analysis by the researcher. This article describes in detail how one software package was used in a poststructural study to link and code multiple forms of data to four research questions for fine-grained analysis. The description will be useful for researchers seeking to use qualitative data analysis software as an analytic tool.