903 results for Bayesian inference, Behaviour analysis, Security, Visual surveillance
Abstract:
Objective: To evaluate the occurrence of severe obstetric complications associated with antepartum and intrapartum hemorrhage among women from the Brazilian Network for Surveillance of Severe Maternal Morbidity. Design: Multicenter cross-sectional study. Setting: Twenty-seven obstetric referral units in Brazil between July 2009 and June 2010. Population: A total of 9555 women categorized as having obstetric complications. Methods: The occurrence of potentially life-threatening conditions, maternal near miss and maternal death associated with antepartum and intrapartum hemorrhage was evaluated. Sociodemographic and obstetric characteristics and the use of criteria for the management of severe bleeding were also assessed in these women. Main outcome measures: Prevalence ratios with their respective 95% confidence intervals, adjusted for the cluster effect of the design, were estimated, and multiple logistic regression analysis was performed to identify factors independently associated with the occurrence of severe maternal outcome. Results: Antepartum and intrapartum hemorrhage occurred in only 8% (767) of women experiencing any type of obstetric complication; however, it was responsible for 18.2% (140) of maternal near-miss and 10% (14) of maternal death cases. On multivariate analysis, maternal age and previous cesarean section were shown to be independently associated with an increased risk of severe maternal outcome (near miss or death). Conclusion: Severe maternal outcome due to antepartum and intrapartum hemorrhage was highly prevalent among Brazilian women, and certain risk factors, maternal age and previous cesarean delivery in particular, were associated with the occurrence of bleeding.
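The analysis described above can be approximated in a few lines of code. The sketch below is a hypothetical illustration, not the study's actual code: it fits a multiple logistic regression for severe maternal outcome with standard errors clustered by referral center to mimic the design adjustment. The file name and all column names (severe_outcome, hemorrhage, age, prior_cesarean, center) are assumptions.

```python
# Hedged sketch: cluster-adjusted logistic regression with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("morbidity_survey.csv")  # hypothetical data file

# Multiple logistic regression for severe maternal outcome (near miss or death),
# with standard errors clustered by referral center.
model = smf.logit("severe_outcome ~ hemorrhage + age + prior_cesarean", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["center"]})
print(result.summary())
```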
Abstract:
We measured the effects of epilepsy on visual contrast sensitivity to linear and vertical sine-wave gratings. Sixteen female adults, aged 21 to 50 years, comprised the sample in this study: eight adults with generalized tonic-clonic seizure-type epilepsy and eight age-matched controls without epilepsy. Contrast thresholds were measured using a temporal two-alternative forced-choice binocular psychophysical method at a distance of 150 cm from the stimuli, with a mean luminance of 40.1 cd/m². A one-way analysis of variance (ANOVA) applied to the linear contrast thresholds showed significant differences between groups (F[3,188] = 14.829; p < .05). Adults with epilepsy had higher contrast thresholds (1.45, 1.04, and 1.18 times higher at frequencies of 0.25, 2.0, and 8.0 cycles per degree of visual angle, respectively). The Tukey Honestly Significant Difference post hoc test showed significant differences (p < .05) for all of the tested spatial frequencies, with the largest difference between groups at the lowest spatial frequency. Epilepsy may therefore cause more damage to the neural pathways that process low spatial frequencies, although it probably alters both the magnocellular visual pathway, which processes low spatial frequencies, and the parvocellular visual pathway, which processes high spatial frequencies. In sum, the experimental group had lower visual contrast sensitivity at all tested spatial frequencies.
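As a minimal sketch of the reported statistical pipeline, the snippet below runs a one-way ANOVA followed by a Tukey HSD post hoc test. The three arrays stand for thresholds at the three tested spatial frequencies; their means loosely echo the ratios quoted above, but all numbers are synthetic placeholders, not the study's data.

```python
# Hedged sketch: one-way ANOVA + Tukey HSD on synthetic contrast thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical thresholds per spatial-frequency condition (placeholder values).
low = rng.normal(1.45, 0.12, size=64)   # 0.25 cycles/degree
mid = rng.normal(1.04, 0.12, size=64)   # 2.0 cycles/degree
high = rng.normal(1.18, 0.12, size=64)  # 8.0 cycles/degree

f_stat, p_value = stats.f_oneway(low, mid, high)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4g}")

# Tukey's Honestly Significant Difference for pairwise comparisons (SciPy >= 1.8).
print(stats.tukey_hsd(low, mid, high))
```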
Abstract:
Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell-cycle phases or distinct conditions, helping the biologist identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether m - 1 components would in fact suffice. If the hypothesis is rejected, m is increased and a new test is carried out; the method continues increasing m until the hypothesis is accepted. The theoretical core of the method is the Full Bayesian Significance Test, an intuitive Bayesian approach that requires neither a model-complexity penalty nor positive prior probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
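The sequential model-growing procedure can be sketched as follows. The stopping rule here is a BIC comparison standing in for the FBST (which, unlike BIC, involves no complexity penalty); the sketch only illustrates the control flow of testing m - 1 against m components and growing m until the smaller model is accepted. The synthetic data are an assumption.

```python
# Hedged sketch of the sequential cluster-number prediction (BIC as FBST stand-in).
import numpy as np
from sklearn.mixture import GaussianMixture

def predict_num_clusters(X, m_max=10, seed=0):
    m = 2
    while m <= m_max:
        smaller = GaussianMixture(n_components=m - 1, random_state=seed).fit(X)
        larger = GaussianMixture(n_components=m, random_state=seed).fit(X)
        # Accept the smaller model once adding a component stops paying off.
        if smaller.bic(X) <= larger.bic(X):
            return m - 1
        m += 1
    return m_max

rng = np.random.default_rng(0)
# Three well-separated synthetic "expression" clusters in 3 dimensions.
X = np.vstack([rng.normal(loc, 0.5, size=(50, 3)) for loc in (0.0, 3.0, 6.0)])
print(predict_num_clusters(X))  # expected: 3 for this synthetic data
```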
Abstract:
Carrying information about the microstructure and stress behaviour of ferromagnetic steels, magnetic Barkhausen noise (MBN) has been used as the basis for effective non-destructive testing methods, opening new areas in industrial applications. One of the factors that determines the quality and reliability of MBN analysis is the way information is extracted from the signal. Commonly, simple scalar parameters such as the amplitude maximum and the signal root mean square are used to characterize the information content. This paper presents a new approach based on time-frequency analysis. The experimental test case concerns the use of MBN signals to characterize hardness gradients in an AISI 4140 steel. For that purpose, different time-frequency (TFR) and time-scale (TSR) representations are assessed: the spectrogram, the Wigner-Ville distribution, the Capongram, the ARgram obtained from an autoregressive model, the scalogram, and the Mellingram obtained from a Mellin transform. It is shown that, due to the nonstationary characteristics of MBN, TFRs can provide a rich and new panorama of these signals. Techniques for extracting time-frequency parameters are then used to support a diagnostic process. Comparison with results obtained by the classical method highlights the improvement in diagnosis provided by the proposed method.
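A minimal example of one of the representations listed above, the spectrogram, is sketched below on a synthetic nonstationary signal standing in for an MBN burst; the sampling rate, chirp parameters and the spectral-centroid feature are arbitrary assumptions, not the paper's settings.

```python
# Hedged sketch: spectrogram of a synthetic nonstationary "burst" signal.
import numpy as np
from scipy import signal

fs = 100_000  # Hz, hypothetical sampling rate
t = np.arange(0, 0.05, 1 / fs)
# Decaying chirp plus noise as a crude stand-in for a Barkhausen burst.
x = signal.chirp(t, f0=1_000, f1=20_000, t1=t[-1]) * np.exp(-60 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

f, tau, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# Scalar features can then be extracted from the TFR, e.g. the spectral
# centroid over time, instead of relying only on amplitude maxima or RMS.
centroid = (f[:, None] * Sxx).sum(axis=0) / Sxx.sum(axis=0)
print(centroid[:5])
```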
Abstract:
Background: With nearly 1,100 species, the fish family Characidae represents more than half of the species of Characiformes and is a key component of Neotropical freshwater ecosystems. The composition, phylogeny, and classification of Characidae are currently uncertain, despite significant efforts based on analysis of morphological and molecular data. No consensus about the monophyly of this group or its position within the order Characiformes has been reached, a problem compounded by the fact that many key studies to date have non-overlapping taxonomic representation and focus only on subsets of this diversity. Results: In the present study we propose a new definition of the family Characidae and a hypothesis of relationships for the Characiformes based on phylogenetic analysis of DNA sequences of two mitochondrial and three nuclear genes (4,680 base pairs). The sequences were obtained from 211 samples representing 166 genera distributed among all 18 recognized families in the order Characiformes, all 14 recognized subfamilies in the Characidae, plus 56 of the genera so far considered incertae sedis in the Characidae. The phylogeny obtained is robust, with most lineages significantly supported by posterior probabilities in the Bayesian analysis and by high bootstrap values from the maximum likelihood and parsimony analyses. Conclusion: A monophyletic assemblage strongly supported in all our phylogenetic analyses is herein defined as the Characidae; it includes the characiform species lacking a supraorbital bone and with a derived position of the emergence of the hyoid artery from the anterior ceratohyal. To recognize this and several other monophyletic groups within characiforms, we propose changes in the limits of several families to facilitate future studies of the Characiformes, and particularly the Characidae. This work presents a new phylogenetic framework for a speciose and morphologically diverse group of freshwater fishes of significant ecological and evolutionary importance across the Neotropics and portions of Africa.
Abstract:
Hardy-Weinberg equilibrium (HWE) is an important genetic property that populations should exhibit whenever they are not subject to adverse conditions such as a complete lack of panmixia, an excess of mutations, or excess selection pressure. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While the HWE formula was historically developed to examine the transmission of alleles in a population from one generation to the next, HWE concepts are now also used in human disease studies to detect genotyping error and disease susceptibility (association); see Ryckman and Williams (2008). Most analyses focus on answering the question of whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced here is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) when comparing positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The proposed disequilibrium coefficient provides an easy and efficient way to carry out the analysis, especially with Bayesian statistics. An R routine (R Development Core Team, 2009) implementing the calculations is provided for the reader.
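To illustrate the posterior-based workflow (not the paper's exact coefficient, whose definition is not reproduced here), the sketch below places a Dirichlet posterior on the three genotype probabilities and examines the posterior of the classic coefficient f = 1 - P(Aa)/(2pq), which is zero at HWE. The genotype counts and the flat prior are assumptions.

```python
# Hedged sketch: posterior of a disequilibrium measure via Dirichlet sampling.
import numpy as np

n_AA, n_Aa, n_aa = 30, 50, 20  # hypothetical genotype counts
rng = np.random.default_rng(0)

# Posterior under a flat Dirichlet(1, 1, 1) prior on (p_AA, p_Aa, p_aa).
post = rng.dirichlet([n_AA + 1, n_Aa + 1, n_aa + 1], size=100_000)
p = post[:, 0] + post[:, 1] / 2         # allele frequency of A
f = 1 - post[:, 1] / (2 * p * (1 - p))  # equals 0 exactly at HWE

lo, hi = np.percentile(f, [2.5, 97.5])
print(f"posterior mean f = {f.mean():.3f}, 95% credible interval = ({lo:.3f}, {hi:.3f})")
```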
Abstract:
Creation of cold dark matter (CCDM) can be described macroscopically by a negative pressure, and the mechanism is therefore capable of accelerating the Universe without the need for an additional dark energy component. In this framework, we discuss the evolution of perturbations by considering a neo-Newtonian approach in which, unlike in standard Newtonian cosmology, the fluid pressure is taken into account even in the homogeneous and isotropic background equations (Lima, Zanchin, and Brandenberger, MNRAS 291, L1, 1997). The evolution of the density contrast is calculated in the linear approximation and compared to that predicted by the Lambda CDM model. The difference between the CCDM and Lambda CDM predictions at the perturbative level is quantified using three different statistical methods: a simple chi²-analysis in the relevant parameter space, Bayesian statistical inference, and a Kolmogorov-Smirnov test. We find that under certain circumstances the CCDM scenario analyzed here predicts an overall dynamics (including the Hubble flow and the matter fluctuation field) that fully recovers that of the traditional cosmic concordance model. Our basic conclusion is that such a reduction of the dark sector provides a viable alternative description to the accelerating Lambda CDM cosmology.
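As a sketch of the Lambda CDM baseline against which the CCDM predictions are compared, the snippet below integrates the standard linear growth equation for the density contrast in the scale factor a, delta'' + (3/a + E'/E) delta' = (3 Om0 / (2 a^5 E^2)) delta, for a flat model; Om0 = 0.3 and the initial conditions are assumed values, and the CCDM side would replace E(a) and the source term with that model's background equations.

```python
# Hedged sketch: linear growth of the density contrast in flat Lambda CDM.
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3  # assumed matter density parameter

def E(a):
    return np.sqrt(Om0 / a**3 + (1 - Om0))

def growth_rhs(a, y):
    delta, ddelta = y
    dE_da = -1.5 * Om0 / (a**4 * E(a))
    acc = -(3 / a + dE_da / E(a)) * ddelta + 1.5 * Om0 * delta / (a**5 * E(a)**2)
    return [ddelta, acc]

# Matter-dominated initial condition: delta grows like a at early times.
a0, a1 = 1e-3, 1.0
sol = solve_ivp(growth_rhs, (a0, a1), [a0, 1.0], rtol=1e-8)
print(f"growth factor D(a=1)/D(a=1e-3) = {sol.y[0, -1] / a0:.1f}")
```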
Abstract:
Base-level maps (or "isobase maps", as originally defined by Filosofov, 1960) express a relationship between valley order and topography. A base-level map can be seen as a "simplified" version of the original topographic surface from which the "noise" of low-order stream erosion has been removed. The method is able to identify areas with possible tectonic influence even within lithologically uniform domains. Base-level maps have recently been applied in semi-detailed-scale (e.g., 1:50,000 or larger) morphotectonic analysis. In this paper, we present an evaluation of the method's applicability to regional-scale analysis (e.g., 1:250,000 or smaller). A test area was selected in northern Brazil, at the lower course of the Araguaia and Tocantins rivers. The drainage network extracted from SRTM30_PLUS DEMs, with a spatial resolution of approximately 900 m, was visually compared with available topographic maps and considered compatible with a 1:1,000,000 scale. Regarding the interpretation of regional-scale morphostructures, the map constructed with 2nd- and 3rd-order valleys was considered to present the best results. Some of the interpreted base-level anomalies correspond to important shear zones and geological contacts present in the 1:5,000,000 Geological Map of South America. Others have no correspondence with mapped Precambrian structures and are considered to represent younger, probably neotectonic, features. A strong E-W orientation of the base-level lines over the inflexion of the Araguaia and Tocantins rivers suggests a major drainage capture. A N-S topographic swath profile over the Tocantins and Araguaia rivers reveals a topographic pattern that, allied with seismic data showing a roughly N-S direction of extension in the area, leads us to interpret this lineament as an E-W, southward-dipping normal fault. There is also a good visual correspondence between the base-level lineaments and geophysical anomalies. A NW-SE lineament in the southeast of the study area partially corresponds to the northern border of the Mosquito lava field, of Jurassic age, and a NW-SE lineament traced in the northeastern sector of the study area can be interpreted as the Picos-Santa Ines lineament, identifiable in geophysical maps but with little expression in hypsometric or topographic maps.
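Conceptually, constructing a base-level map amounts to interpolating a surface through the valley bottoms of a chosen order range. The sketch below assumes valley points with Strahler orders have already been extracted from the DEM (the file name, column names and linear interpolation method are hypothetical) and builds a base-level surface from 2nd- and 3rd-order valleys only, at a grid spacing matching the SRTM30_PLUS resolution.

```python
# Hedged sketch: interpolate a base-level surface from 2nd/3rd-order valleys.
import numpy as np
import pandas as pd
from scipy.interpolate import griddata

valleys = pd.read_csv("valley_points.csv")  # columns: x, y, z, strahler_order
sel = valleys[valleys["strahler_order"].isin([2, 3])]

# Regular grid at ~900 m spacing to match the source DEM resolution.
xi = np.arange(sel["x"].min(), sel["x"].max(), 900.0)
yi = np.arange(sel["y"].min(), sel["y"].max(), 900.0)
XI, YI = np.meshgrid(xi, yi)

base_level = griddata(sel[["x", "y"]].to_numpy(), sel["z"].to_numpy(),
                      (XI, YI), method="linear")
print(base_level.shape)  # the "simplified" topographic surface
```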
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification comprising three main aspects: (1) generation of Artificial Gene Network (AGN) models through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature-selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data; the results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks were assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation in the average degree k, its network recovery rate decreasing as k increased. The signal size was important for the inference method to achieve better accuracy in the network identification rate, with very good results for small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
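Steps (1) and (3) of the framework can be sketched with networkx: generate ER, WS and BA gold-standard networks and score an identified network by the fraction of gold-standard edges it recovers. The inference step itself is abstracted away here, and the network size, degree and scoring metric are assumed values, not the paper's exact setup.

```python
# Hedged sketch: AGN generation and a simple recovery score for validation.
import networkx as nx

n, k = 100, 4  # genes and target average degree (assumed values)
models = {
    "ER": nx.gnp_random_graph(n, k / (n - 1), seed=0),
    "WS": nx.watts_strogatz_graph(n, k, 0.1, seed=0),
    "BA": nx.barabasi_albert_graph(n, k // 2, seed=0),
}

def edge_recovery(true_g, inferred_g):
    """Fraction of gold-standard edges present in the inferred network."""
    true_edges = set(map(frozenset, true_g.edges()))
    hits = sum(1 for e in map(frozenset, inferred_g.edges()) if e in true_edges)
    return hits / max(len(true_edges), 1)

# A perfect identification scores 1.0:
print(edge_recovery(models["ER"], models["ER"]))
```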
Abstract:
This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 x 3 and 2 x 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations, or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
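A numerical illustration of the stated result, consistent with but not taken from the paper: under an orthogonally invariant covariance structure, the MLE of the mean matrix is the sample average, and plug-in MLEs of its eigenvalues and eigenvectors follow by eigendecomposition. The 3 x 3 mean, noise scale and sample size below are assumptions.

```python
# Hedged sketch: plug-in MLEs for the spectrum of a mean symmetric matrix.
import numpy as np

rng = np.random.default_rng(0)
true_mean = np.diag([3.0, 2.0, 1.0])

def sample_symmetric(mean, scale, rng):
    noise = rng.normal(0.0, scale, size=mean.shape)
    noise = (noise + noise.T) / 2            # symmetrize the Gaussian noise
    return mean + noise

samples = np.stack([sample_symmetric(true_mean, 0.2, rng) for _ in range(500)])
mean_mle = samples.mean(axis=0)              # MLE of the mean matrix
eigvals, eigvecs = np.linalg.eigh(mean_mle)  # plug-in MLEs for the spectrum
print(np.round(eigvals, 3))                  # close to [1, 2, 3], ascending
```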
Abstract:
A simultaneous optimization strategy based on a neuro-genetic approach is proposed for selecting laser-induced breakdown spectroscopy (LIBS) operating conditions for the simultaneous determination of macronutrients (Ca, Mg and P), micronutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A LIBS system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. The integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate LIBS operating conditions with a compromise yielding high peak areas for all elements simultaneously, a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single LIBS working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 μs integration time gate, 1.1 μs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem.
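The neuro-genetic idea can be caricatured in a few lines: train a neural-network surrogate mapping the four acquisition parameters to a combined peak-area response, then let a small genetic algorithm search the surrogate for the best compromise. Here sklearn's MLPRegressor stands in for the Bayesian Regularized ANN, and the training data are a synthetic placeholder response surface, not LIBS measurements.

```python
# Hedged sketch: NN surrogate + minimal genetic algorithm over 4 parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 4))       # gate, delay, gain, pulses (scaled)
y_train = -np.sum((X_train - 0.6) ** 2, axis=1)  # placeholder response surface

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# Minimal GA: truncation selection with Gaussian mutation, clipped to [0, 1].
pop = rng.uniform(0, 1, size=(50, 4))
for _ in range(40):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-10:]]      # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=40)]
    children = np.clip(children + rng.normal(0, 0.05, children.shape), 0, 1)
    pop = np.vstack([parents, children])

best = pop[np.argmax(surrogate.predict(pop))]
print(np.round(best, 2))  # scaled operating condition, near [0.6] * 4 here
```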