20 results for Data-driven Methods
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Wavelet correlation between subjects: A time-scale data driven analysis for brain mapping using fMRI
Abstract:
Functional magnetic resonance imaging (fMRI) based on the BOLD signal has been used to indirectly measure the local neural activity induced by cognitive tasks or stimulation. Most fMRI data analysis is carried out using the general linear model (GLM), a statistical approach which predicts the changes in the observed BOLD response based on an expected hemodynamic response function (HRF). When the task is cognitively complex, or in the presence of disease, variations in the shape and/or delay of the HRF may reduce the reliability of the results. A novel exploratory method using fMRI data, which attempts to discriminate neurophysiological signals induced by the stimulation protocol from artifacts or other confounding factors, is introduced in this paper. This new method is based on the fusion of correlation analysis and the discrete wavelet transform to identify similarities in the time course of the BOLD signal in a group of volunteers. We illustrate the usefulness of this approach by analyzing fMRI data from normal subjects presented with standardized human face pictures expressing different degrees of sadness. The results show that the proposed wavelet correlation analysis has greater statistical power than conventional GLM or time domain intersubject correlation analysis. (C) 2010 Elsevier B.V. All rights reserved.
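The following Python sketch illustrates the core idea of a scale-wise intersubject correlation: each subject's BOLD time course is decomposed with a discrete wavelet transform, and the coefficients are correlated across subjects at each scale. The wavelet family, decomposition level, and averaging scheme are illustrative assumptions, not the authors' actual pipeline.

# A minimal sketch of scale-wise intersubject correlation of BOLD time series.
import numpy as np
import pywt

def wavelet_intersubject_correlation(bold, wavelet="db4", level=4):
    """bold: array of shape (n_subjects, n_timepoints) for one voxel/ROI.
    Returns the mean pairwise correlation of wavelet coefficients per scale."""
    coeffs = [pywt.wavedec(ts, wavelet, level=level) for ts in bold]
    n_sub = len(coeffs)
    scale_corr = []
    for s in range(level + 1):                      # approximation + detail scales
        mats = np.array([c[s] for c in coeffs])     # (n_subjects, n_coeffs_at_scale)
        corrs = []
        for i in range(n_sub):
            for j in range(i + 1, n_sub):
                corrs.append(np.corrcoef(mats[i], mats[j])[0, 1])
        scale_corr.append(np.mean(corrs))
    return scale_corr

# Example with synthetic data: a shared slow component plus subject-specific noise.
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 8 * np.pi, 256))
bold = shared + 0.5 * rng.standard_normal((10, 256))
print(wavelet_intersubject_correlation(bold))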
Abstract:
Astronomy has evolved almost exclusively by the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which one combines both techniques simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method of analysis of data cubes (data from single field observations, containing two spatial and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge.
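A minimal Python sketch of the PCA tomography idea follows, under the assumption that the cube is stored with two spatial axes and one spectral axis; the eigenvectors act as eigenspectra and the projections, reshaped into images, play the role of the tomograms. The synthetic cube is purely illustrative.

# A minimal sketch of PCA tomography for a data cube of shape (ny, nx, n_lambda).
import numpy as np

def pca_tomography(cube):
    ny, nx, nl = cube.shape
    data = cube.reshape(ny * nx, nl)            # one spectrum per spatial pixel
    data = data - data.mean(axis=0)             # remove the mean spectrum
    cov = data.T @ data / (data.shape[0] - 1)   # spectral covariance matrix
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]             # sort by decreasing variance
    evals, evecs = evals[order], evecs[:, order]
    tomograms = (data @ evecs).T.reshape(nl, ny, nx)
    return evals, evecs, tomograms

# Synthetic example cube.
rng = np.random.default_rng(1)
cube = rng.normal(size=(20, 20, 50))
variances, eigenspectra, tomos = pca_tomography(cube)
print(variances[:3])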
Abstract:
In this paper a new parametric method to deal with discrepant experimental results is developed. The method is based on the fit of a probability density function to the data. This paper also compares the characteristics of different methods used to deduce recommended values and uncertainties from a discrepant set of experimental data. The methods are applied to the (137)Cs and (90)Sr published half-lives and special emphasis is given to the deduced confidence intervals. The obtained results are analyzed considering two fundamental properties expected from an experimental result: the probability content of confidence intervals and the statistical consistency between different recommended values. The recommended values and uncertainties for the (137)Cs and (90)Sr half-lives are 10,984 (24) days and 10,523 (70) days, respectively. (C) 2009 Elsevier B.V. All rights reserved.
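As a rough illustration of fitting a probability density function to a discrepant data set and reading off a recommended value with a confidence interval, the sketch below uses a normal pdf and made-up half-life values; it is not the estimator or the data of the paper.

# A hedged sketch: maximum-likelihood fit of a normal pdf to discrepant results.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

values = np.array([10950., 11020., 10970., 11005., 10880.])  # hypothetical half-lives (days)

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(norm.logpdf(values, loc=mu, scale=sigma))

fit = minimize(neg_log_likelihood, x0=[values.mean(), values.std()], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
# 68% confidence interval on the recommended value (normal approximation).
ci = mu_hat + np.array([-1, 1]) * sigma_hat / np.sqrt(len(values))
print(f"recommended value: {mu_hat:.0f} days, 68% CI: {ci.round(0)}")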
Abstract:
Aims. We derive lists of proper motions and kinematic membership probabilities for 49 open clusters and possible open clusters in the zone of the Bordeaux PM2000 proper motion catalogue (+11° ≤ δ ≤ +18°). We test different parametrisations of the proper motion and position distribution functions and select the most successful one. In the light of those results, we analyse some objects individually. Methods. We differentiate between cluster and field member stars, and assign membership probabilities, by applying a new and fully automated method based on both parametrisations of the proper motion and position distribution functions, and on genetic algorithm optimization heuristics associated with a derivative-based hill climbing algorithm for the likelihood optimization. Results. We present a catalogue comprising kinematic parameters and associated membership probability lists for 49 open clusters and possible open clusters in the Bordeaux PM2000 catalogue region. We note that this is the first determination of proper motions for five open clusters. We confirm the non-existence of two kinematic populations in the region of 15 previously suspected non-existent objects.
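A simplified stand-in for the kinematic membership step is sketched below: model the proper-motion plane as a two-component (cluster plus field) mixture and read membership probabilities from the component posteriors. This sketch uses EM via scikit-learn instead of the genetic-algorithm and hill-climbing optimisation described above, and it omits the positional term of the full model; the data are simulated.

# A simplified membership-probability sketch in proper-motion space.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
cluster = rng.normal([-2.0, 1.5], 0.3, size=(80, 2))    # compact cluster motion (mas/yr)
field = rng.normal([0.0, 0.0], 3.0, size=(400, 2))      # broad field distribution
pm = np.vstack([cluster, field])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(pm)
probs = gmm.predict_proba(pm)
# Identify the cluster component as the one with the smaller dispersion.
cluster_idx = np.argmin([np.trace(c) for c in gmm.covariances_])
membership = probs[:, cluster_idx]
print("mean membership of simulated cluster stars:", membership[:80].mean().round(2))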
Abstract:
Optical monitoring systems are necessary to manufacture multilayer thin-film optical filters with low tolerance on the spectrum specification. Furthermore, to achieve better accuracy in the measurement of film thickness, direct monitoring is a must. Direct monitoring implies acquiring spectrum data, in real time, from the optical component undergoing the film deposition itself. For film deposition on the surfaces of optical components, the high-vacuum evaporator chamber is the most popular equipment. Inside the evaporator, at the top of the chamber, there is a metallic support with several holes where the optical components are mounted. This metallic support rotates to promote film homogenization. To acquire a measurement of the spectrum of the film being deposited, it is necessary to pass a light beam through a glass witness undergoing the film deposition process and collect a sample of the light beam using a spectrometer. As both the light beam and the light collector are stationary, a synchronization system is required to identify the moment at which the optical component passes through the light beam.
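As an illustration of the synchronization requirement, the sketch below detects the instants at which the rotating witness glass crosses the stationary beam by thresholding the transmitted-intensity signal; the threshold and the synthetic signal are assumptions, not the actual instrument described in the paper.

# A minimal sketch of the synchronization step: detect beam crossings.
import numpy as np

def detect_passes(intensity, threshold):
    """Return sample indices where the signal rises above the threshold."""
    above = intensity > threshold
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return rising

# Synthetic intensity: low background with periodic transmission peaks.
t = np.arange(0, 10, 0.001)                         # seconds
intensity = 0.05 + 0.9 * (np.sin(2 * np.pi * 1.0 * t) > 0.995)
print(detect_passes(intensity, threshold=0.5))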
Abstract:
Eight different models to represent the effect of friction in control valves are presented: four models based on physical principles and four empirical ones. The physical models, both static and dynamic, have the same structure. The models are implemented in Simulink/Matlab (R) and compared, using different friction coefficients and input signals. Three of the models were able to reproduce the stick-slip phenomenon and passed all the tests, which were applied following ISA standards. (C) 2008 Elsevier Ltd. All rights reserved.
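For intuition on the stick-slip phenomenon mentioned above, the following sketch simulates a valve stem with a Karnopp-style friction model (static plus Coulomb friction) driven by a ramp force against a spring. It is written in Python rather than Simulink/Matlab, and the parameter values are invented for illustration; they are not taken from the paper.

# A small illustrative stick-slip simulation with a Karnopp-style friction model.
import numpy as np

m, k = 2.0, 500.0            # stem mass (kg), spring constant (N/m)
F_s, F_c = 40.0, 30.0        # static and Coulomb friction forces (N)
dv = 1e-3                    # Karnopp dead-band on velocity (m/s)
dt, T = 1e-4, 2.0

x, v = 0.0, 0.0
positions = []
for i in range(int(T / dt)):
    t = i * dt
    F_ext = 100.0 * t - k * x          # ramp input force minus spring force
    if abs(v) < dv:                    # "stuck" regime
        if abs(F_ext) <= F_s:
            v, F_fric = 0.0, F_ext     # friction balances the applied force
        else:
            F_fric = F_s * np.sign(F_ext)
    else:                              # "slipping" regime: Coulomb friction
        F_fric = F_c * np.sign(v)
    a = (F_ext - F_fric) / m
    v += a * dt
    x += v * dt
    positions.append(x)

print(f"final stem position: {positions[-1]:.4f} m")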
Abstract:
Aim: To look at the characteristics of the Postgraduate Hospital Educational Environment Measure (PHEEM) using data from the UK, Brazil, Chile and the Netherlands, and to examine the reliability and characteristics of PHEEM, especially how the three PHEEM subscales fitted with factors derived statistically from the data sets. Methods: Statistical analysis of PHEEM scores from 1563 sets of data, using reliability analysis, exploratory factor analysis and correlations of the derived factors with the three defined PHEEM subscales. Results: PHEEM was very reliable, with an overall Cronbach's alpha of 0.928. Three factors were derived by exploratory factor analysis. Factor One correlated most strongly with the teaching subscale (R=0.802), Factor Two correlated most strongly with the role autonomy subscale (R=0.623) and Factor Three correlated most strongly with the social support subscale (R=0.538). Conclusions: PHEEM is a multi-dimensional instrument. Overall, it is very reliable. There is a good fit of the three defined subscales, derived by qualitative methods, with the three principal factors derived from the data by exploratory factor analysis.
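A small sketch of the reliability statistic quoted above, Cronbach's alpha, computed from a respondents-by-items score matrix; the simulated data are illustrative, not PHEEM responses.

# Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: array of shape (n_respondents, n_items)."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))
items = latent + 0.7 * rng.normal(size=(200, 40))     # 40 correlated items
print(round(cronbach_alpha(items), 3))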
Abstract:
Simultaneous acquisition of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) aims to disentangle the description of brain processes by exploiting the advantages of each technique. Most studies in this field focus on exploring the relationships between fMRI signals and the power spectrum at specific frequency bands (alpha, beta, etc.). On the other hand, brain mapping of EEG signals (e.g., interictal spikes in epileptic patients) usually assumes a haemodynamic response function for a parametric analysis applying the GLM, as a rough approximation. The integration of the information provided by the high spatial resolution of MR images and the high temporal resolution of EEG may be improved by relating them through transfer functions, which allow the identification of neurally driven areas without strong assumptions about haemodynamic response shapes or the homogeneity of brain haemodynamics. The difference in sampling rates is the first obstacle to a full integration of EEG and fMRI information. Moreover, a parametric specification of a function representing the commonalities of both signals is not established. In this study, we introduce a new data-driven method for estimating the transfer function from the EEG signal to the fMRI signal at the EEG sampling rate. This approach avoids subsampling the EEG to the fMRI time resolution and naturally provides a test of the EEG's predictive power over BOLD signal fluctuations, in a well-established statistical framework. We illustrate this concept in resting state (eyes closed) and visual simultaneous fMRI-EEG experiments. The results indicate that it is possible to predict the BOLD fluctuations in the occipital cortex by using EEG measurements. (C) 2010 Elsevier Inc. All rights reserved.
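A simplified sketch of estimating a transfer function (an FIR impulse response) from an EEG-derived regressor to the BOLD signal by least squares follows. For brevity both signals share one time grid here, whereas the method described above works at the EEG sampling rate; the signals and the toy HRF-like kernel are synthetic assumptions.

# Least-squares FIR estimate of a transfer function between two signals.
import numpy as np

def fit_fir(x, y, n_lags):
    """Least-squares FIR filter h with y[t] ~ sum_k h[k] * x[t - k]."""
    X = np.column_stack([np.roll(x, k) for k in range(n_lags)])
    X, y = X[n_lags:], y[n_lags:]            # drop rows affected by wrap-around
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

rng = np.random.default_rng(4)
eeg = rng.standard_normal(2000)
true_h = np.exp(-np.arange(30) / 6.0) * np.sin(np.arange(30) / 4.0)   # toy HRF-like kernel
bold = np.convolve(eeg, true_h)[:2000] + 0.1 * rng.standard_normal(2000)
h_hat = fit_fir(eeg, bold, n_lags=30)
print(np.round(np.corrcoef(h_hat, true_h)[0, 1], 3))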
Abstract:
Resting state functional magnetic resonance imaging (fMRI) reveals a distinct network of correlated brain function representing a default mode state of the human brain. The underlying structural basis of this functional connectivity pattern is still widely unexplored. We combined fractional anisotropy measures of fiber tract integrity derived from diffusion tensor imaging (DTI) and resting state fMRI data obtained at 3 Tesla from 20 healthy elderly subjects (56 to 83 years of age) to determine the white matter microstructure underlying default mode connectivity. We hypothesized that the functional connectivity between the posterior cingulate and hippocampus from resting state fMRI data would be associated with the white matter microstructure in the cingulate bundle and in the fiber tracts connecting the posterior cingulate gyrus with the lateral temporal lobes, medial temporal lobes, and precuneus. This was demonstrated at the p < 0.001 level using a voxel-based multivariate analysis of covariance (MANCOVA) approach. In addition, we used a data-driven technique of joint independent component analysis (ICA) that uncovers spatial patterns that are linked across modalities. It revealed a pattern of white matter tracts, including the cingulate bundle and associated fiber tracts, resembling the findings from the hypothesis-driven analysis, and this pattern was linked to the pattern of default mode network (DMN) connectivity in the resting state fMRI data. Our findings support the notion that the functional connectivity between the posterior cingulate and hippocampus, and the functional connectivity across the entire DMN, are based on distinct patterns of anatomical connectivity within the cerebral white matter. (C) 2009 Elsevier Inc. All rights reserved.
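A schematic sketch of joint ICA across two modalities follows: per-subject feature vectors from each modality (e.g. an FA map and a functional-connectivity map) are concatenated and decomposed together, so every independent component carries a linked spatial pattern in both. The dimensions and data are synthetic, and this is not the authors' exact implementation.

# Schematic joint ICA: concatenate per-subject features from two modalities.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_subjects, n_vox_dti, n_vox_fmri = 20, 500, 400
dti = rng.standard_normal((n_subjects, n_vox_dti))
fmri = rng.standard_normal((n_subjects, n_vox_fmri))

joint = np.hstack([dti, fmri])                    # (subjects, dti + fmri features)
ica = FastICA(n_components=5, random_state=0)
mixing_by_subject = ica.fit_transform(joint)      # subject loadings per component
components = ica.components_                      # (5, dti + fmri features)
dti_patterns = components[:, :n_vox_dti]          # linked white-matter pattern
fmri_patterns = components[:, n_vox_dti:]         # linked functional pattern
print(mixing_by_subject.shape, dti_patterns.shape, fmri_patterns.shape)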
Abstract:
Objective: Although suicide is a leading cause of death worldwide, clinicians and researchers lack a data-driven method to assess the risk of suicide attempts. This study reports the results of an analysis of a large cross-national epidemiologic survey database that estimates the 12-month prevalence of suicidal behaviors, identifies risk factors for suicide attempts, and combines these factors to create a risk index for 12-month suicide attempts separately for developed and developing countries. Method: Data come from the World Health Organization (WHO) World Mental Health (WMH) Surveys (conducted 2001-2007), in which 108,705 adults from 21 countries were interviewed using the WHO Composite International Diagnostic Interview. The survey assessed suicidal behaviors and potential risk factors across multiple domains, including socio-demographic characteristics, parent psychopathology, childhood adversities, DSM-IV disorders, and history of suicidal behavior. Results: Twelve-month prevalence estimates of suicide ideation, plans, and attempts are 2.0%, 0.6%, and 0.3%, respectively, for developed countries and 2.1%, 0.7%, and 0.4%, respectively, for developing countries. Risk factors for suicidal behaviors in both developed and developing countries include female sex, younger age, lower education and income, unmarried status, unemployment, parent psychopathology, childhood adversities, and presence of diverse 12-month DSM-IV mental disorders. Combining risk factors from multiple domains produced risk indices that accurately predicted 12-month suicide attempts in both developed and developing countries (area under the receiver operating characteristic curve = 0.74-0.80). Conclusions: Suicidal behaviors occur at similar rates in both developed and developing countries. Risk indices assessing multiple domains can predict suicide attempts with fairly good accuracy and may be useful in aiding clinicians in the prediction of these behaviors. J Clin Psychiatry 2010;71(12):1617-1628 (C) Copyright 2010 Physicians Postgraduate Press, Inc.
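As a generic illustration of building a multi-domain risk index and scoring it by the area under the ROC curve, the sketch below fits a logistic regression to simulated risk factors; the features and outcome are simulated and do not reproduce the WMH survey analysis.

# Risk index via logistic regression, evaluated by ROC AUC on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 5000
X = rng.standard_normal((n, 8))                       # 8 hypothetical risk factors
logit = -4.0 + X @ np.array([0.6, 0.4, 0.3, 0.5, 0.2, 0.1, 0.3, 0.2])
y = rng.random(n) < 1 / (1 + np.exp(-logit))          # rare binary outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk_index = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, risk_index), 3))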
Abstract:
The dynamical processes that lead to open cluster disruption cause a cluster's mass to decrease. To investigate such processes from the observational point of view, it is important to identify open cluster remnants (OCRs), which are intrinsically poorly populated. Due to their nature, distinguishing them from field star fluctuations is still an unresolved issue. In this work, we developed a statistical diagnostic tool to distinguish poorly populated star concentrations from background field fluctuations. We use 2MASS photometry to explore one of the conditions required for a stellar group to be a physical group: to produce distinct sequences in a colour-magnitude diagram (CMD). We use automated tools to (i) derive the limiting radius; (ii) decontaminate the field and assign membership probabilities; (iii) fit isochrones; and (iv) compare object and field CMDs, considering the isochrone solution, in order to verify the similarity. If the object cannot be statistically considered a field fluctuation, we derive its probable age, distance modulus, reddening and uncertainties in a self-consistent way. As a test, we apply the tool to open clusters and comparison fields. Finally, we study the OCR candidates DoDz 6, NGC 272, ESO 435 SC48 and ESO 325 SC15. The tool is optimized to treat these low-statistics objects and to separate out the best OCR candidates for studies on kinematics and chemical composition. The study of the possible OCRs will certainly provide a deeper understanding of OCR properties and constraints for theoretical models, including insights into the evolution of open clusters and dissolution rates.
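One ingredient of such a diagnostic tool can be sketched as a comparison of object and offset-field star counts in colour-magnitude cells, giving a per-cell excess as a simple decontamination step. The bin sizes and synthetic photometry below are illustrative choices, not the tool's actual algorithm.

# Per-cell CMD excess of an object field over a scaled offset field.
import numpy as np

def cmd_excess(obj_colour, obj_mag, fld_colour, fld_mag, area_ratio,
               colour_bins, mag_bins):
    """Excess counts per CMD cell after scaling the field by the area ratio."""
    obj_hist, _, _ = np.histogram2d(obj_colour, obj_mag, bins=[colour_bins, mag_bins])
    fld_hist, _, _ = np.histogram2d(fld_colour, fld_mag, bins=[colour_bins, mag_bins])
    return obj_hist - area_ratio * fld_hist

rng = np.random.default_rng(7)
# Field stars spread over the CMD; "cluster" stars concentrated along a narrow sequence.
fld_colour, fld_mag = rng.uniform(0, 1.5, 600), rng.uniform(10, 16, 600)
seq_mag = rng.uniform(10, 16, 60)
obj_colour = np.concatenate([rng.uniform(0, 1.5, 200), 0.1 + 0.08 * (seq_mag - 10)])
obj_mag = np.concatenate([rng.uniform(10, 16, 200), seq_mag])

excess = cmd_excess(obj_colour, obj_mag, fld_colour, fld_mag,
                    area_ratio=200 / 600, colour_bins=np.linspace(0, 1.5, 7),
                    mag_bins=np.linspace(10, 16, 7))
print(excess.round(1))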
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depends on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Only based on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that better represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, varying exhaustively the quantities associated with the method. Our results have shown that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method could point out successfully the lack of precession.
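For reference, the cross-entropy method for continuous optimisation can be sketched in a few lines: sample candidate parameter sets from a Gaussian, keep an elite fraction, and refit the Gaussian to the elite until it concentrates on the optimum. The quadratic test function below is only a stand-in for the precession model.

# A compact cross-entropy optimiser for continuous parameters.
import numpy as np

def cross_entropy_minimise(objective, mu, sigma, n_samples=200, elite_frac=0.1,
                           n_iter=60, rng=np.random.default_rng(8)):
    mu, sigma = np.array(mu, float), np.array(sigma, float)
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

target = np.array([3.0, -1.0, 0.5])
best = cross_entropy_minimise(lambda p: np.sum((p - target) ** 2),
                              mu=[0.0, 0.0, 0.0], sigma=[5.0, 5.0, 5.0])
print(best.round(3))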
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
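The model image and performance function described above can be sketched as a sum of elliptical Gaussian components scored by the summed squared difference from the observed image; the exact parametrisation below (widths along the major and minor axes as an eccentricity proxy) is an illustrative assumption, not the paper's.

# Sum-of-elliptical-Gaussians model image and squared-difference figure of merit.
import numpy as np

def elliptical_gaussian(shape, x0, y0, amp, sigma_major, sigma_minor, theta):
    y, x = np.indices(shape)
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / sigma_major) ** 2 + (yr / sigma_minor) ** 2))

def model_image(shape, components):
    return sum(elliptical_gaussian(shape, *c) for c in components)

def performance(observed, components):
    """Sum of squared differences between model and observed images."""
    return np.sum((model_image(observed.shape, components) - observed) ** 2)

# Toy benchmark: an "observed" jet made of three components plus noise.
rng = np.random.default_rng(9)
truth = [(32, 32, 10.0, 4.0, 2.0, 0.3), (45, 35, 5.0, 3.0, 1.5, 0.8), (55, 40, 2.0, 2.5, 1.0, 1.2)]
observed = model_image((64, 80), truth) + 0.1 * rng.standard_normal((64, 80))
print(performance(observed, truth))           # near the noise floor for the true parameters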
Abstract:
Background: A relative inability to capture a sufficiently large patient population in any one geographic location has traditionally limited research into rare diseases. Methods and Results: Clinicians interested in the rare disease lymphangioleiomyomatosis (LAM) have worked with the LAM Treatment Alliance, the MIT Media Lab, and Clozure Associates to cooperate in the design of a state-of-the-art data coordination platform that can be used for clinical trials and other research focused on the global LAM patient population. This platform is a component of a set of web-based resources, including a patient self-report data portal, aimed at accelerating research in rare diseases in a rigorous fashion. Conclusions: Collaboration between clinicians, researchers, advocacy groups, and patients can create essential community resource infrastructure to accelerate rare disease research. The International LAM Registry is an example of such an effort.
Abstract:
Context. B[e] supergiants are luminous, massive post-main sequence stars exhibiting non-spherical winds, forbidden lines, and hot dust in a disc-like structure. The physical properties of their rich and complex circumstellar environment (CSE) are not well understood, partly because these CSE cannot be easily resolved at the large distances found for B[e] supergiants (typically ≳ 1 kpc). Aims. From mid-IR spectro-interferometric observations obtained with VLTI/MIDI we seek to resolve and study the CSE of the Galactic B[e] supergiant CPD-57° 2874. Methods. For a physical interpretation of the observables (visibilities and spectrum) we use our ray-tracing radiative transfer code (FRACS), which is optimised for thermal spectro-interferometric observations. Results. Thanks to the short computing time required by FRACS (<10 s per monochromatic model), best-fit parameters and uncertainties for several physical quantities of CPD-57° 2874 were obtained, such as the inner dust radius, the relative flux contributions of the central source and of the dusty CSE, the dust temperature profile, and the disc inclination. Conclusions. The analysis of VLTI/MIDI data with FRACS allowed one of the first direct determinations of the physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and using a full model-fitting approach. In a larger context, the study of B[e] supergiants is important for a deeper understanding of the complex structure and evolution of hot, massive stars.