882 results for methods: data analysis


Relevance:

100.00%

Publisher:

National Highway Traffic Safety Administration, Washington, D.C.

Abstract:

Relevance:

100.00%

Publisher:

Abstract:

We present the largest catalogue to date of optical counterparts for H I radio-selected galaxies, HOPCAT. Of the 4315 H I radio-detected sources from the H I Parkes All Sky Survey (HIPASS) catalogue, we find optical counterparts for 3618 (84 per cent) galaxies. Of these, 1798 (42 per cent) have confirmed optical velocities and 848 (20 per cent) are single matches without confirmed velocities. Some galaxy matches are members of galaxy groups. From these multiple galaxy matches, 714 (16 per cent) have confirmed optical velocities and a further 258 (6 per cent) galaxies are without confirmed velocities. For 481 (11 per cent), multiple galaxies are present but no single optical counterpart can be chosen and 216 (5 per cent) have no obvious optical galaxy present. Most of these 'blank fields' are in crowded fields along the Galactic plane or have high extinctions. Isolated 'dark galaxy' candidates are investigated using an extinction cut of A(Bj) < 1 mag and the blank-fields category. Of the 3692 galaxies with an A(Bj) extinction < 1 mag, only 13 are also blank fields. Of these, 12 are eliminated either with follow-up Parkes observations or are in crowded fields. The remaining one has a low surface brightness optical counterpart. Hence, no isolated optically dark galaxies have been found within the limits of the HIPASS survey.
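The dark-galaxy selection above amounts to intersecting an extinction cut with the blank-fields category. A minimal sketch in Python, assuming a hypothetical table of HOPCAT matches whose file and column names (extinction_bj, match_class) are illustrative only:

    import pandas as pd

    # Hypothetical HOPCAT-like match table; column names are illustrative.
    hopcat = pd.read_csv("hopcat_matches.csv")

    # Extinction cut: keep fields with A(Bj) < 1 mag (3692 galaxies in the paper).
    low_ext = hopcat[hopcat["extinction_bj"] < 1.0]

    # Intersect with the blank-fields category to obtain isolated
    # 'dark galaxy' candidates for follow-up (13 fields in the paper).
    candidates = low_ext[low_ext["match_class"] == "blank_field"]
    print(len(candidates))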

Relevance:

100.00%

Publisher:

Abstract:

We present an application of Mathematical Morphology (MM) to the classification of astronomical objects, both for star/galaxy differentiation and for galaxy morphology classification. We demonstrate that, for CCD images, 99.3 +/- 3.8% of galaxies can be separated from stars using MM, with 19.4 +/- 7.9% of the stars being misclassified. We demonstrate that, for photographic plate images, the number of galaxies correctly separated from the stars can be increased using our MM diffraction spike tool, which allows 51.0 +/- 6.0% of the high-brightness galaxies that are inseparable using current techniques to be correctly classified, with only 1.4 +/- 0.5% of the high-brightness stars contaminating the population. We demonstrate that elliptical (E) and late-type spiral (Sc-Sd) galaxies can be classified using MM with an accuracy of 91.4 +/- 7.8%. The method involves fewer 'free parameters' than current techniques, especially automated machine-learning algorithms. The limitations imposed on MM galaxy morphology classification by seeing and distance are also presented. We examine various star/galaxy differentiation and galaxy morphology classification techniques commonly used today, and show that our MM techniques compare very favourably.
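As a rough illustration of how morphological operators can separate point-like from extended sources, here is a minimal Python sketch using scikit-image; the operator sequence, structuring-element size and threshold are assumptions made for illustration, not the authors' pipeline:

    import numpy as np
    from skimage import measure
    from skimage.morphology import binary_opening, disk

    def classify_detections(image, threshold, star_radius=2):
        """Label each detection as star-like or extended (galaxy-like)."""
        mask = image > threshold
        # Opening with a disk slightly larger than a stellar profile
        # erases point-like detections but preserves extended sources.
        extended = binary_opening(mask, disk(star_radius + 1))
        labels = []
        for region in measure.regionprops(measure.label(mask)):
            r, c = (int(v) for v in region.centroid)
            labels.append("galaxy" if extended[r, c] else "star")
        return labels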

Relevance:

100.00%

Publisher:

Abstract:

Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or from flux measurements biased to incorrectly low values through the use of circular apertures. To address this issue, we present the software package TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, the natural extension of the circular aperture appropriate for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps and is described by three parameters: the trail length, the trail angle, and the radius. The TRIPPy software package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is simply the model PSF, which consists of a Moffat profile and a super-sampled lookup table, convolved along the trail. From the TSF, accurate pill-aperture corrections can be estimated as a function of pill radius, with an accuracy of 10 mmag for highly trailed sources. Analogous to the use of small circular apertures and their associated aperture corrections, small-radius pill apertures can be used to preserve the S/N of low-flux sources, with the appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
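The pill geometry lends itself to a compact mask implementation: a pixel lies inside the pill if its distance to the central trail segment is at most the radius. A minimal numpy sketch following that description (illustrative only, not TRIPPy's actual code):

    import numpy as np

    def pill_mask(shape, x0, y0, length, angle, radius):
        """Boolean mask of a pill aperture: a rectangle with semicircular
        end-caps, centred on (x0, y0), for a trail of given length/angle."""
        y, x = np.indices(shape)
        # Rotate coordinates so the trail lies along the x-axis.
        dx = (x - x0) * np.cos(angle) + (y - y0) * np.sin(angle)
        dy = -(x - x0) * np.sin(angle) + (y - y0) * np.cos(angle)
        # Distance from each pixel to the trail segment of half-length L/2.
        along = np.clip(np.abs(dx) - length / 2.0, 0.0, None)
        return np.hypot(along, dy) <= radius

    # Example: summed flux inside the pill around a trailed source.
    # flux = image[pill_mask(image.shape, x0, y0, L, theta, r)].sum()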

Relevance:

100.00%

Publisher:

Abstract:

Aims: Several studies suggest that the activity level of a planet-host star can be influenced by the presence of a close-by orbiting planet. Moreover, the proposed interaction mechanisms, magnetic interaction and tidal interaction, exhibit very different dependences on the orbital separation between the star and the planet. Detecting an activity enhancement and characterizing its dependence on planetary orbital distance can, in principle, allow us to identify the physical mechanism behind the enhancement. Methods: We used the HARPS-N spectrograph to measure the stellar activity level of HD 80606 during the planetary periastron passage and compared it with the activity measured close to apastron. With an eccentricity of 0.93 and an orbital period of 111 days, the system shows an extreme variation in orbital separation, making it a perfect target to test our hypothesis. Results: We find no evidence for a variation in the activity level of the star as a function of planetary orbital distance, as measured by all of the activity indicators employed: log(R'HK), Hα, NaI, and HeI. None of the models employed, whether of magnetic or tidal interaction, provides a good description of the data. The photometry revealed no variation either, but it was strongly affected by poor weather conditions. Conclusions: We find no evidence for star-planet interaction in HD 80606 at the moment of the periastron passage of its very eccentric planet. The straightforward explanation for the non-detection is the absence of interaction, resulting from a low magnetic field strength on either the planet or the star and from the low level of tidal interaction between the two. However, we cannot exclude two scenarios: i) the interaction could be instantaneous and of magnetic origin, concentrated on the substellar point and its surrounding area; or ii) the interaction could lead to a delayed activity enhancement. In either scenario, a star-planet interaction would not be detectable with the dataset described in this paper.
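To make "extreme variation in orbital separation" concrete, the quoted eccentricity alone fixes the apastron-to-periastron distance ratio (a worked consequence of Keplerian geometry, not a figure from the paper):

    \frac{r_{\mathrm{apo}}}{r_{\mathrm{peri}}} = \frac{a(1+e)}{a(1-e)} = \frac{1.93}{0.07} \approx 28,

so the star-planet separation changes by roughly a factor of 28 over each 111-day orbit.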

Relevance:

100.00%

Publisher:

Abstract:

Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999-2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four of the 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random-effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from fewer than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, several aspects could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on the heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analyses.
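To make the dominant strategy concrete (a treatment effect estimated within each trial, then combined assuming a common effect across trials), here is a minimal inverse-variance pooling sketch in Python; the per-trial numbers are purely illustrative:

    import numpy as np

    # Per-trial treatment-effect estimates (e.g. log hazard ratios)
    # and their standard errors -- illustrative values only.
    effects = np.array([-0.21, -0.35, -0.10, -0.28])
    ses = np.array([0.10, 0.15, 0.12, 0.09])

    # Common-effect (fixed-effect) inverse-variance pooling,
    # which keeps the analysis stratified by trial.
    w = 1.0 / ses**2
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    print(f"pooled effect {pooled:.3f} (SE {pooled_se:.3f})")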

Relevance:

100.00%

Publisher:

Abstract:

Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The methods currently used for qPCR data analysis, including the threshold cycle (CT) method and linear and non-linear model-fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming n cycles of raw data into n-1 cycles of difference data. Linear regression was then applied to the natural logarithm of the transformed data, and the amplification efficiency and the initial number of DNA molecules were calculated for each PCR run. To evaluate the new method, we compared it, in terms of accuracy and precision, with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum fluorescence. Three criteria (threshold identification, maximum R2, and maximum slope) were employed to select the target data points. Considering that PCR data are time series, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimate of the initial DNA amount and a reasonable estimate of the PCR amplification efficiency. When the maximum R2 and maximum slope criteria were used, the original linear regression method gave an accurate estimate of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error introduced by subtracting an unknown background and is thus theoretically more accurate and reliable. The method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
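A minimal Python sketch of the taking-difference idea: under the exponential model F_i = B + F0*E**i, differencing consecutive cycles cancels the constant background B, and log of the differences is linear in the cycle index. The growth-phase selection below is a crude stand-in for the paper's threshold-identification step:

    import numpy as np

    def taking_difference_regression(fluorescence):
        """Estimate amplification efficiency E and initial signal F0
        from raw qPCR fluorescence, without background subtraction.

        d_i = F_{i+1} - F_i = F0 * (E - 1) * E**i, so the background B
        cancels and log(d_i) is linear in i with slope log(E)."""
        f = np.asarray(fluorescence, dtype=float)
        d = np.diff(f)                    # n cycles -> n-1 differences
        i = np.arange(len(d))             # cycle index of each difference
        keep = d > 0                      # crude exponential-phase filter
        slope, intercept = np.polyfit(i[keep], np.log(d[keep]), 1)
        E = np.exp(slope)
        F0 = np.exp(intercept) / (E - 1.0)
        return E, F0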

Relevance:

100.00%

Publisher:

Abstract:

Aims: To describe a local data linkage project matching hospital data with the Australian Institute of Health and Welfare (AIHW) National Death Index (NDI) to assess long-term outcomes of intensive care unit (ICU) patients. Methods: Data were obtained from hospital intensive care and cardiac surgery databases for all patients aged 18 years and over admitted to either of two intensive care units at a tertiary-referral hospital between 1 January 1994 and 31 December 2005. Date of death was obtained from the AIHW NDI by probabilistic software matching, in addition to manual checking through hospital databases and other sources. Survival was calculated from the time of ICU admission, with a censoring date of 14 February 2007. For patients with multiple hospital admissions requiring intensive care, only data from the first admission were analysed. Summary and descriptive statistics were used for preliminary data analysis. Kaplan-Meier survival analysis was used to analyse factors determining long-term survival. Results: During the study period, 21,415 unique patients had 22,552 hospital admissions that included an ICU admission; 19,058 surgical procedures were performed, with a total of 20,092 ICU admissions. There were 4,936 deaths. Median follow-up was 6.2 years, totalling 134,203 patient-years. The casemix was predominantly cardiac surgery (80%), followed by cardiac medical (6%) and other medical (4%). Unadjusted survival at 1, 5 and 10 years was 97%, 84% and 70%, respectively. One-year survival ranged from 97% for cardiac surgery to 36% for cardiac arrest. An APACHE II score was available for 16,877 patients. For those discharged alive from hospital, 1-, 5- and 10-year survival varied with discharge location. Conclusions: ICU-based linkage projects are feasible for determining the long-term outcomes of ICU patients.
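A minimal sketch of the survival step, assuming a linked table with ICU admission and death dates; the file and column names are placeholders, and the lifelines package stands in for whatever software the study actually used:

    import pandas as pd
    from lifelines import KaplanMeierFitter

    df = pd.read_csv("icu_linked.csv",
                     parse_dates=["icu_admission", "death_date"])

    censor_date = pd.Timestamp("2007-02-14")  # censoring date from the study
    died = df["death_date"].notna() & (df["death_date"] <= censor_date)
    end = df["death_date"].where(died, censor_date)
    years = (end - df["icu_admission"]).dt.days / 365.25

    kmf = KaplanMeierFitter()
    kmf.fit(years, event_observed=died)
    print(kmf.survival_function_at_times([1, 5, 10]))  # cf. 97%, 84%, 70%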

Relevance:

100.00%

Publisher:

Abstract:

Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage, followed by appropriate retrofitting, helps prevent structural failure, saves money otherwise spent on maintenance or replacement, and ensures that the structure operates safely and efficiently throughout its intended life.

Though visual inspection and other techniques, such as vibration-based methods, are available for the SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive and increasingly used option. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (energy from the damage source itself is utilised, with no need to supply energy from outside) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of its attractive features. Despite these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment.

In the AE technique, the location of the emission source is usually calculated using the arrival times and velocities of the AE signals recorded by a number of sensors. Complications arise, however, because AE waves can travel through a structure in a number of different modes with different velocities and frequencies; to locate a source accurately, it is therefore necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of the modes captured by a single sensor for source localisation.

A major problem in the practical use of the AE technique is the presence of AE sources other than cracks, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity, so discriminating signals to identify their sources is very important. This work developed a model that uses signal-processing tools such as cross-correlation, magnitude-squared coherence and the energy distribution in different frequency bands, as well as modal analysis (comparing the amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources.

Quantification tools for assessing the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, none has achieved universal approval or been shown to suit all situations. The b-value analysis, which studies the distribution of the amplitudes of AE signals, and its modified form (the improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel, and gave encouraging results for laboratory data, extending the possibility of their use on real-life structures.

By addressing these primary issues, this thesis has helped improve the effectiveness of the AE technique for the structural health monitoring of civil infrastructure such as bridges.
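For the b-value analysis mentioned above, a minimal sketch of the conventional calculation: AE amplitudes in dB are mapped to magnitudes (dB/20), the amplitude distribution is modelled as log10 N(>m) = a - b*m, and b is estimated by the standard maximum-likelihood (Aki) formula. These conventions are the usual ones in the AE literature and are assumed here rather than taken from the thesis:

    import numpy as np

    def ae_b_value(amplitudes_db, cutoff_db):
        """Maximum-likelihood b-value from AE signal amplitudes in dB.

        Magnitude m = dB/20; b = log10(e) / (mean(m) - m_cutoff)."""
        m = np.asarray(amplitudes_db, dtype=float) / 20.0
        m_c = cutoff_db / 20.0
        m = m[m >= m_c]                 # discard sub-threshold hits
        return np.log10(np.e) / (m.mean() - m_c)

A falling b-value as loading progresses is commonly read as a shift from distributed microcracking to localised macrocrack growth.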

Relevance:

100.00%

Publisher:

Abstract:

Objective: To describe unintentional injuries to children aged less than one year, using coded and textual information, in three-month age bands to reflect their development over the year. Methods: Data from the Queensland Injury Surveillance Unit were used. The Unit collects demographic, clinical and circumstantial details about injured persons presenting to selected emergency departments across the State. Only injuries coded as unintentional in children admitted to hospital were included in this analysis. Results: After editing, 1,082 children remained for analysis, 24 with transport-related injuries. Falls were the most common injury, but became proportionately less frequent over the year, whereas burns and scalds and foreign-body injuries increased. The proportion of injuries due to contact with persons or objects varied little, but poisonings were relatively more common in the first and fourth three-month periods. Descriptions indicated that family members were somehow causally involved in 16% of injuries. Our findings are in qualitative agreement with comparable previous studies. Conclusion: The pattern of injuries varies over the first year of life and is clearly linked to the child's increasing mobility. Implications: Injury patterns in the first year of life should be reported over shorter intervals. Preventive measures for young children need to be designed with their rapidly changing developmental stage in mind, using a variety of strategies, one of which could be opportunistic, developmentally specific education of parents.

Injuries in young children are of abiding concern given their immediate health and emotional effects and their potential for long-term adverse sequelae. In Australia, in the financial year 2006/07, 2,869 children less than 12 months of age were admitted to hospital for an unintentional injury, a rate of 10.6 per 1,000, representing a considerable economic and social burden. Given that many of these injuries are preventable, this is particularly concerning. Most epidemiologic studies analyse data in five-year age bands, so children less than five years of age are examined as a group. This study includes only children younger than one year of age, to identify injury detail lost in analyses of the larger group, as we hypothesised that the injury pattern varies with the developmental stage of the child. The authors of several North American studies have commented that, in dealing with injuries in pre-school children, broad age groupings are inadequate to do justice to the rapid developmental changes of infancy and early childhood, and have in consequence analysed injuries over shorter intervals. To our knowledge, no similar analysis of Australian infant injuries has been published to date. This paper describes injury in children less than 12 months of age using data from the Queensland Injury Surveillance Unit (QISU).

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND & AIMS: Metabolomics is the comprehensive analysis of low-molecular-weight endogenous metabolites in a biological sample. It can map perturbations in the early biochemical changes of disease and hence provides an opportunity to develop predictive biomarkers that yield valuable insights into disease mechanisms. The aim of this study was to elucidate the changes in endogenous metabolites and to phenotype the metabolic profile of D-galactosamine (GalN)-induced acute hepatitis in rats by UPLC-ESI MS. METHODS: The systemic biochemical actions of GalN administration (ip, 400 mg/kg) were investigated in male Wistar rats using conventional clinical chemistry, liver histopathology and metabolomic analysis of urine by UPLC-ESI MS. Urine was collected predose (-24 to 0 h) and 0-24, 24-48, 48-72 and 72-96 h post-dose. The mass spectra of the urine were analysed visually and in conjunction with multivariate data analysis. RESULTS: The results demonstrated a time-dependent biochemical effect of the GalN dose on the levels of a range of low-molecular-weight metabolites in urine, correlated with the developing phase of GalN-induced acute hepatitis. Urinary excretion of beta-hydroxybutanoic acid and citric acid decreased following GalN dosing, whereas excretion of glycocholic acid, indole-3-acetic acid, sphinganine, N-acetyl-L-phenylalanine, cholic acid and creatinine increased, suggesting that several key metabolic pathways, including energy metabolism, lipid metabolism and amino acid metabolism, were perturbed by GalN. CONCLUSION: This metabolomic investigation demonstrates that this robust non-invasive tool offers insight into the metabolic states of diseases.

Relevance:

100.00%

Publisher:

Abstract:

This paper uses innovative content analysis techniques to map how the death of Oscar Pistorius' girlfriend, Reeva Steenkamp, was framed in Twitter conversations. Around 1.5 million posts from a two-week timeframe are analyzed with a combination of syntactic and semantic methods. The analysis is grounded in the frame analysis perspective and differs from sentiment analysis: instead of looking for explicit evaluations, such as "he is guilty" or "he is innocent", the results show how opinions can be identified through complex articulations of more implicit symbolic devices, such as repeatedly mentioned examples and metaphors. Different frames are adopted by users as more information about the case is revealed: from a more episodic frame, widely used at the very beginning, to more systemic approaches highlighting the association of the event with urban violence, gun control issues, and violence against women. A detailed timeline of the discussions is provided.

Relevance:

100.00%

Publisher:

Abstract:

This thesis proposes three novel models that extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and largely untested statistic for statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model-choice case study is the first of its kind and contains interesting results obtained using so-called unit-information prior distributions.

Relevance:

100.00%

Publisher:

Abstract:

A combined data matrix consisting of high-performance liquid chromatography-diode array detector (HPLC-DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than either of the individual data sets. The HPLC peaks (organic components) and the ICP-MS measurements (trace metal elements) of the CM samples were investigated using principal component analysis (PCA) and linear discriminant analysis (LDA); these essentially qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Three further methods, K-nearest neighbours (KNN), back-propagation artificial neural network (BP-ANN) and least-squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analysed by the KNN method produced the best results (100% correct on the prediction set). Additionally, multiple linear regression (MLR) was used to explore the relationships between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
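A minimal Python sketch of the data-fusion-plus-classification workflow with scikit-learn; the synthetic arrays merely stand in for the HPLC-DAD and ICP-MS measurements, and the scaling/PCA/KNN settings are illustrative assumptions:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X_hplc = rng.normal(size=(60, 12))   # stand-in HPLC-DAD peak areas
    X_icpms = rng.normal(size=(60, 9))   # stand-in ICP-MS element levels
    y = rng.integers(0, 3, size=60)      # three provinces of origin

    X = np.hstack([X_hplc, X_icpms])     # the combined data matrix
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), PCA(n_components=5),
                          KNeighborsClassifier(n_neighbors=3))
    model.fit(X_tr, y_tr)
    print(model.score(X_te, y_te))       # prediction-set accuracy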

Relevance:

100.00%

Publisher:

Abstract:

We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between-cluster and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in its numerical results than existing methods. We apply it to a dataset from a randomized clinical trial.
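To make the induced-smoothing idea concrete: the discontinuous indicator inside a rank-based estimating function is replaced by a normal distribution function with a data-driven scale, so the estimating function becomes differentiable and covariance matrices can be obtained by standard sandwich calculations. A generic (non-clustered, Gehan-type) form, for illustration only; the clustered version in the paper carries additional structure:

    S(\beta) = \sum_{i \ne j} (x_i - x_j)\, I\{e_i(\beta) \le e_j(\beta)\}
    \;\longrightarrow\;
    \tilde S(\beta) = \sum_{i \ne j} (x_i - x_j)\,
        \Phi\!\left( \frac{e_j(\beta) - e_i(\beta)}{r_{ij}} \right),

where e_i(\beta) = y_i - x_i^{\top}\beta are residuals, \Phi is the standard normal distribution function, and r_{ij}^2 = n^{-1} (x_i - x_j)^{\top} \Omega (x_i - x_j) for a suitable covariance matrix \Omega.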