934 results for methods: data analysis


Relevance: 100.00%

Abstract:

Much attention has recently been given to mass spectrometry (MS) based disease classification, diagnosis, and protein-based biomarker identification. As in microarray-based investigations, the proteomic data generated by such high-throughput experiments often have a high feature-to-sample ratio, and the biological information and patterns are confounded with noise, redundancy, and outliers. The development of algorithms and procedures for analyzing and interpreting such data is therefore of paramount importance. In this paper, we propose a hybrid system for analyzing such high-dimensional data. The proposed method uses a k-means clustering based feature extraction and selection procedure to bridge filter and wrapper selection methods. Potentially informative mass/charge (m/z) markers selected by filters are subjected to k-means clustering for correlation and redundancy reduction, and a multi-objective genetic algorithm selector is then employed to identify discriminative m/z markers among those produced by the clustering step. Experimental results indicate that the proposed method is suitable for m/z biomarker selection and MS-based sample classification.
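The redundancy-reduction stage of such a pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: `kmeans` is a plain Lloyd's-algorithm k-means, and `reduce_redundancy` (a hypothetical helper name) clusters feature columns by their sample profiles and keeps one representative per cluster, mimicking the correlation/redundancy reduction applied to filter-selected m/z markers.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) on the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute centers; keep the old center if a cluster empties
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def reduce_redundancy(data, k, seed=0):
    """Group feature columns (e.g. m/z markers) by k-means on their
    sample profiles; keep, per cluster, the feature nearest the center."""
    feats = data.T                      # one row per feature
    labels, centers = kmeans(feats, k, seed=seed)
    keep = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if idx.size == 0:
            continue
        d = ((feats[idx] - centers[j]) ** 2).sum(axis=1)
        keep.append(int(idx[d.argmin()]))
    return sorted(keep)
```

On data where columns 0-2 share one profile and columns 3-5 another, the sketch keeps one column from each group; a genetic-algorithm selector would then operate on the surviving representatives.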

Relevance: 100.00%

Abstract:

Airport baggage handling systems (BHS) are a critical infrastructure component within major airports, essential for ensuring smooth luggage transfer while preventing dangerous materials from being loaded onto aircraft. This paper proposes a standard set of measures for assessing the expected performance of a baggage handling system through discrete event simulation. These evaluation methods also apply to the study of general network systems. Results from applying these methods reveal operational characteristics of the studied BHS in terms of metrics such as peak throughput, in-system time, and system recovery time.
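The in-system time metric can be illustrated with a toy single-server model, far simpler than a full BHS simulation: bags queue for one screening station in arrival order, and each bag's in-system time is its departure minus its arrival. The function name `simulate_bhs` is illustrative, not from the paper.

```python
def simulate_bhs(arrivals, service_time):
    """Single-server FIFO screening line: return per-bag in-system
    times and the makespan (time at which the system is cleared)."""
    finish_prev = 0.0
    in_system = []
    for t in sorted(arrivals):
        start = max(t, finish_prev)     # wait if the server is busy
        finish_prev = start + service_time
        in_system.append(finish_prev - t)
    return in_system, finish_prev
```

With arrivals every 2 time units and a 3-unit service time, in-system time grows by one unit per bag, the kind of congestion signature a BHS study would quantify.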

Relevance: 100.00%

Abstract:

BACKGROUND: The relative contributions of cannabis and alcohol use to educational outcomes are unclear. We examined the extent to which adolescent cannabis or alcohol use predicts educational attainment in emerging adulthood. METHODS: Participant-level data were integrated from three longitudinal studies from Australia and New Zealand (Australian Temperament Project, Christchurch Health and Development Study, and Victorian Adolescent Health Cohort Study). The number of participants varied by analysis (N = 2179-3678); participants were assessed on multiple occasions between ages 13 and 25. We described the association between the frequency of cannabis or alcohol use before age 17 and high school non-completion, university non-enrolment, and degree non-attainment by age 25. Two other measures of adolescent alcohol use were also examined. RESULTS: After covariate adjustment using a propensity score approach, adolescent cannabis use (weekly or more often) was associated with 1.5- to 2-fold increases in the odds of high school non-completion (OR = 1.60, 95% CI = 1.09-2.35), university non-enrolment (OR = 1.51, 95% CI = 1.06-2.13), and degree non-attainment (OR = 1.96, 95% CI = 1.36-2.81). In contrast, adjusted associations for all measures of adolescent alcohol use were inconsistent and weaker. Attributable risk estimates indicated that adolescent cannabis use accounted for a greater proportion of the overall rate of non-progression through formal education than adolescent alcohol use. CONCLUSIONS: The findings are important to the debate about the relative harms of cannabis and alcohol use. Adolescent cannabis use is a better marker of lower educational attainment than adolescent alcohol use and identifies an important target population for preventive intervention.
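The odds ratios and 95% confidence intervals reported above follow the standard Wald construction for a 2×2 exposure-outcome table, which can be sketched as follows; the counts in the example are illustrative, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) from the inverse cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```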

Relevance: 100.00%

Abstract:

Housing is an important component of wealth for a typical household in many countries. The objective of this paper is to investigate the effect of real-estate price variation on welfare, closing a gap between the welfare literature in Brazil and that in the U.S., the U.K., and other developed countries. Our first motivation is that real estate is probably more important in Brazil than elsewhere as a proportion of wealth, which potentially makes the impact of a price change larger. Our second motivation is that real-estate prices boomed in Brazil in the last five years: prime real estate in Rio de Janeiro and São Paulo tripled in value in that period, and a smaller but generalized increase has been observed throughout the country. Third, Brazil has also seen a consumption boom over the same period; indeed, the recent rise of some of the poor to middle-income status is well documented, not only for Brazil but for other emerging countries as well. Regarding consumption and real-estate prices in Brazil, one cannot infer causality from correlation, but one can do causal inference with an appropriate structural model and proper inference, or with proper inference in a reduced-form setup. Our last motivation is the complete absence of studies of this kind for Brazil, which makes ours a pioneering study. We assemble a panel-data set of the determinants of non-durable consumption growth across Brazilian states, merging the techniques and ideas in Campbell and Cocco (2007) and in Case, Quigley and Shiller (2005). With appropriate controls and panel-data methods, we investigate whether house-price variation has a positive effect on non-durable consumption. The results show a non-negligible, significant impact of changes in the price of real estate on welfare (consumption), although smaller than what Campbell and Cocco found. Our findings support the view that the channel through which house prices affect consumption is a financial one.
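The workhorse of panel studies of this kind is the within (fixed-effects) estimator: demean the dependent variable and the regressor within each state, then run OLS on the demeaned data, which sweeps out state-specific intercepts. A minimal sketch (the helper name `within_ols` and the toy data are illustrative, not the authors' code):

```python
import numpy as np

def within_ols(y, x, group):
    """Fixed-effects (within) estimator for a single regressor:
    demean y and x by group, then return the OLS slope."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    garr = np.asarray(group)
    yd, xd = y.copy(), x.copy()
    for g in np.unique(garr):
        m = garr == g
        yd[m] -= y[m].mean()            # remove the group intercept
        xd[m] -= x[m].mean()
    return (xd @ yd) / (xd @ xd)
```

On data generated as y = alpha_state + 0.5 * x, the estimator recovers the slope 0.5 exactly regardless of the state intercepts.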

Relevance: 100.00%

Abstract:

The present study introduces a multi-agent architecture designed to automate the process of data integration and intelligent data analysis. Unlike other approaches, the architecture was designed using an agent-oriented methodology, Tropos. Based on the proposed architecture, we describe a Web-based application in which the agents are responsible for analysing petroleum well drilling data to identify possible abnormalities. The intelligent data analysis method used was a neural network.
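The analysis agent's classifier can be sketched at its simplest as a single-neuron network trained by gradient descent on log-loss; this is a hedged stand-in for the paper's neural network, and the feature values below are synthetic, not drilling data.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=2000, seed=0):
    """Minimal single-neuron classifier (logistic unit) trained by
    stochastic gradient descent; a toy stand-in for a neural network."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in X[0]]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                      # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Label 1 (abnormal) when the neuron's activation is positive."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0
```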

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from the CRAN site of R. The authors' title is not misleading, but I was very pleasantly surprised by how deep the word "applied" is here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on the spatial objects? A few years ago I would have said "no," especially to the "want" part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize the effort needed to get all of our spatial analyses accomplished. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and I learned a lot.

Relevance: 100.00%

Abstract:

Complexity in time series is an intriguing feature of living dynamical systems, with potential use for the identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). We calculated qSDiff curves, which consist of the difference between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiff(max)) at q not equal to 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only qSDiff(max) values were capable of distinguishing the HRV groups (p-values 5.10 × 10^-3, 1.11 × 10^-7, and 5.50 × 10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4758815]
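The starting point of the method, standard sample entropy, can be sketched directly from its definition: the negative logarithm of the conditional probability that sequences matching for m points also match for m + 1 points within tolerance r. This is the classical SampEn, not the paper's q-generalized variant.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a sequence x: -ln(A/B), where B counts template
    pairs matching for m points and A those matching for m + 1 points."""
    N = len(x)

    def matches(length):
        # use N - m templates for both lengths so counts are comparable
        temps = [x[i:i + length] for i in range(N - m)]
        count = 0
        for i in range(len(temps)):
            for j in range(i + 1, len(temps)):
                # Chebyshev distance between templates
                if max(abs(a - b) for a, b in zip(temps[i], temps[j])) <= r:
                    count += 1
        return count

    B, A = matches(m), matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

A strictly periodic series is maximally regular (SampEn of 0), while an irregular series scores higher, the behavior the qSDiff construction builds on.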

Relevance: 100.00%

Abstract:

Background: Infant mortality is an important measure of human development, related to the level of welfare of a society. In order to inform public policy, various studies have tried to identify the factors that influence infant mortality at an aggregated level. The objective of this paper is to analyze the regional pattern of infant mortality in Brazil, evaluating the effect of infrastructure, socio-economic, and demographic variables to understand its distribution across the country. Methods: Regressions including socio-economic and living-conditions variables are conducted in a panel-data structure. More specifically, a spatial panel-data model with fixed effects and a spatial error autocorrelation structure is used to address spatial dependence problems. The spatial modeling approach takes into account the potential presence of spillovers between neighboring spatial units. The spatial units considered are Minimum Comparable Areas, defined to provide a consistent definition across Census years. Data are drawn from the 1980, 1991, and 2000 Censuses of Brazil, and from data collected by the Ministry of Health (DATASUS). In order to identify the influence of health care infrastructure, variables related to the number of public and private hospitals are included. Results: The results indicate that the panel model with spatial effects provides the best fit to the data. The analysis confirms that the provision of health care infrastructure and social policy measures (e.g. improving educational attainment) are linked to reduced infant mortality rates (IMR). An original finding concerns the role of spatial effects in the analysis of IMR: spillover effects associated with health infrastructure and water and sanitation facilities imply that there are regional benefits beyond the unit of analysis. Conclusions: A spatial modeling approach is important to produce reliable estimates in the analysis of panel IMR data. Substantively, this paper contributes to our understanding of the physical and social factors that influence IMR in the case of a developing country.
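The spatial dependence that motivates such a model is conventionally diagnosed with Moran's I, which measures whether neighboring units have similar values. A minimal sketch with a binary neighbor matrix (this is the standard diagnostic, not the paper's estimator):

```python
def morans_i(x, w):
    """Moran's I for values x and a symmetric binary weight matrix w
    (w[i][j] = 1 when units i and j are neighbours, 0 otherwise)."""
    n = len(x)
    mean = sum(x) / n
    d = [xi - mean for xi in x]                     # deviations
    num = sum(w[i][j] * d[i] * d[j] for i in range(n) for j in range(n))
    den = sum(di * di for di in d)
    W = sum(w[i][j] for i in range(n) for j in range(n))
    return (n / W) * (num / den)
```

Positive values indicate clustering (high-IMR units bordering high-IMR units); values near the expectation -1/(n-1) indicate no spatial pattern.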

Relevance: 100.00%

Abstract:

The autoregressive (AR) estimator, a non-parametric method, is used to analyze functional magnetic resonance imaging (fMRI) data. The same method has been used successfully in several other time-series analyses. It uses exclusively the available experimental data points to estimate the most plausible power spectrum compatible with the data, with no need for assumptions about non-measured points. The time series obtained from fMRI block-paradigm data is analyzed by the AR method to determine the brain regions active in the processing of a given stimulus. This method is considerably more reliable than the fast Fourier transform or parametric methods. The time series corresponding to each image pixel is analyzed using the AR estimator, and the corresponding poles are obtained. The pole distribution gives the shape of the power spectrum, and pixels with poles at the stimulation frequency are considered active regions. The method was applied to simulated and real data; its superiority is shown by receiver operating characteristic (ROC) curves obtained using the simulated data.
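The pole-based detection idea can be sketched as follows: fit AR coefficients to a pixel's time series by least squares, take the roots of the characteristic polynomial as poles, and read off the dominant frequency from the pole angles. This is a generic sketch (least-squares fit, hypothetical `ar_pole_freq` helper), not the thesis' exact estimator.

```python
import numpy as np

def ar_pole_freq(x, order=2):
    """Fit an AR(order) model by least squares and return the pole
    frequencies (cycles/sample) derived from the pole angles."""
    x = np.asarray(x, float)
    y = x[order:]
    # lag matrix: column j holds the series delayed by j+1 samples
    X = np.column_stack([x[order - j: len(x) - j]
                         for j in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # poles are the roots of z^order - a1*z^(order-1) - ... - a_order
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.abs(np.angle(poles)) / (2 * np.pi)
```

For a block paradigm, a pole whose frequency matches the stimulation frequency flags the pixel as active.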

Relevance: 100.00%

Abstract:

In this thesis two major topics in medical ultrasound imaging are addressed: deconvolution and segmentation. For the first, a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are shown to provide a reliable source of information for accurately characterizing biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in the delineation of myocardial contours and is feasible and accurate even on clinical data.
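The flavor of reflectivity restoration can be conveyed with frequency-domain Wiener deconvolution, a simple regularized inverse filter that stands in for the thesis' statistically consistent MAP estimator (which it is not); the circular-convolution model and regularization weight below are assumptions of the sketch.

```python
import numpy as np

def wiener_deconvolve(y, h, lam=1e-6):
    """Wiener-style deconvolution under a circular convolution model:
    X(f) = Y(f) H*(f) / (|H(f)|^2 + lam)."""
    H = np.fft.fft(h, n=len(y))          # zero-padded pulse spectrum
    Y = np.fft.fft(y)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + lam)
    return np.fft.ifft(X).real
```

Given an echo formed by convolving a sparse reflectivity with a known pulse, the filter recovers the reflectivity up to the regularization bias.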

Relevance: 100.00%

Abstract:

The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain maximum homogeneity in data quality, acquisition, and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. The field I was personally responsible for, however, was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
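The core measurement of such a pipeline, background-subtracted aperture photometry, can be sketched as follows. This is a textbook illustration (hypothetical `aperture_flux` helper, median sky estimate), not the thesis' pipeline code.

```python
import numpy as np

def aperture_flux(img, cx, cy, r_ap, r_in, r_out):
    """Sum pixel values within r_ap of (cx, cy) and subtract the sky
    level estimated as the median of the annulus r_in <= r < r_out."""
    yy, xx = np.indices(img.shape)
    d = np.hypot(xx - cx, yy - cy)       # distance of each pixel
    ap = d <= r_ap                       # source aperture
    ann = (d >= r_in) & (d < r_out)      # sky annulus
    sky = np.median(img[ann])
    return img[ap].sum() - sky * ap.sum()
```

Repeating this per frame for a star and a set of comparison stars yields the relative, short-term light curves described above.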

Relevance: 100.00%

Abstract:

Supernovae are among the most energetic events occurring in the universe and are so far the only verified extrasolar source of neutrinos. As the explosion mechanism is still not well understood, recording a burst of neutrinos from such a stellar explosion would be an important benchmark for particle physics as well as for core collapse models. The neutrino telescope IceCube is located at the geographic South Pole and monitors the antarctic glacier for Cherenkov photons. Even though it was conceived for the detection of high-energy neutrinos, it is capable of identifying a burst of low-energy neutrinos emitted by a supernova in the Milky Way, by exploiting the low photomultiplier noise in the antarctic ice and extracting a collective rate increase. A signal Monte Carlo specifically developed for water Cherenkov telescopes is presented. With its help, we investigate how well IceCube can distinguish between core collapse models and oscillation scenarios. In the second part, nine years of data taken with the IceCube precursor AMANDA are analyzed. Intensive data cleaning methods are presented along with a background simulation. From the result, an upper limit on the expected occurrence of supernovae within the Milky Way is determined.
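The idea of extracting a collective rate increase can be illustrated with a simple Poisson significance: sum the counts of all sensors in a time bin and compare with the summed background expectation. This is a textbook sketch (hypothetical function and numbers), not the detector's actual trigger.

```python
import math

def rate_excess_significance(counts, background_rate, bin_width):
    """Significance (in sigma) of a collective rate increase: compare
    the summed counts of all sensors in one time bin against the
    summed Poisson background expectation."""
    expected_per_sensor = background_rate * bin_width
    observed = sum(counts)
    mu = len(counts) * expected_per_sensor
    # Gaussian approximation to the Poisson fluctuation of the sum
    return (observed - mu) / math.sqrt(mu)
```

A small per-sensor excess that would be invisible in a single channel becomes a many-sigma signal once hundreds of low-noise channels are summed.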

Relevance: 100.00%

Abstract:

Advances in mobile and wireless communication for healthcare (m-Health), along with improvements in information science, allow the design and development of new patient-centric models for the provision of personalised healthcare services, increased patient independence, and improved self-control and self-management capabilities. This paper provides a brief overview of m-Health applications for the self-management of diabetes mellitus and the enhancement of patients' quality of life. Furthermore, the design and development of a mobile phone application for Type 1 Diabetes Mellitus (T1DM) self-management is presented. The technical evaluation of the application, which permits the management of blood glucose measurements, blood pressure measurements, insulin dosage, food/drink intake, and physical activity, has shown that the use of mobile phone technologies together with data analysis methods might improve the self-management of T1DM.
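One illustrative summary statistic such an application might compute from the logged glucose measurements is time in range, the fraction of readings inside a target band; the function and the default 70-180 mg/dL band are an assumption of this sketch, not a feature confirmed by the paper.

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Fraction of blood-glucose readings inside the target range,
    a common self-management summary statistic."""
    in_range = sum(1 for g in readings_mg_dl if low <= g <= high)
    return in_range / len(readings_mg_dl)
```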

Relevance: 100.00%

Abstract:

Background: In Switzerland there are about 150,000 equestrians. Horse-related injuries, including head and spinal injuries, are frequently treated at our level I trauma centre. Objectives: To analyse injury patterns, protective factors, and risk factors related to horse riding, and to define groups of safer riders and those at greater risk. Methods: We present a retrospective and a case-control survey conducted at a tertiary trauma centre in Bern, Switzerland. Equestrians injured from July 2000 to June 2006 were retrospectively classified by injury pattern and neurological symptoms. Equestrians injured from July to December 2008 were prospectively enrolled using a questionnaire with 17 variables; the same questionnaire was applied to non-injured controls. Multiple logistic regression was performed, and combined risk factors were calculated using inference trees. Results: In the retrospective survey, a total of 528 injuries occurred in 365 patients. The injury pattern was as follows: extremities (32%: upper 17%, lower 15%), head (24%), spine (14%), thorax (9%), face (9%), pelvis (7%), and abdomen (2%). Two injuries were fatal; one case resulted in quadriplegia and one in paraplegia. In the case-control survey, 61 patients and 102 controls were included (patients: 72% female, 28% male; controls: 63% female, 37% male). Falls were most frequent (65%), followed by horse kicks (19%) and horse bites (2%). Variables statistically significant for the controls were older age (p = 0.015), male gender (p = 0.04), and holding a diploma in horse riding (p = 0.004). Inference trees revealed typical groups at lower and higher risk of injury. Conclusions: Experience with riding and having passed a diploma in horse riding appear to be protective factors. Educational levels and injury risk should be graded within an educational-level/injury-risk index.