906 results for Timed and Probabilistic Automata
Abstract:
OBJECTIVE: To assess rates of offering and uptake of HIV testing and their predictors among women who attended prenatal care. METHODS: A population-based cross-sectional study was conducted among postpartum women (N=2,234) who attended at least one prenatal care visit in 12 cities. Independent and probabilistic samples were selected in the cities studied. Sociodemographic data, information about prenatal care and access to HIV prevention interventions during the current pregnancy were collected. Bivariate and multivariate analyses were carried out to assess independent effects of the covariates on offering and uptake of HIV testing. Data collection took place between November 1999 and April 2000. RESULTS: Overall, 77.5% of the women reported undergoing HIV testing during the current pregnancy. Offering of HIV testing was positively associated with: previous knowledge about prevention of mother-to-child transmission of HIV; higher number of prenatal care visits; higher level of education; and being white. The HIV testing acceptance rate was 92.5%. CONCLUSIONS: The study results indicate that dissemination of information about prevention of mother-to-child transmission among women may contribute to increasing HIV testing coverage during pregnancy. Non-white women with a lower level of education should be prioritized. Strategies to increase attendance of vulnerable women at prenatal care and to raise awareness among health care workers are of utmost importance.
Abstract:
This work studies the combination of safe and probabilistic reasoning through the hybridization of Monte Carlo integration techniques with continuous constraint programming. In continuous constraint programming, variables range over continuous domains (represented as intervals) and are subject to constraints (relations between variables); the goal is to find values for those variables that satisfy all the constraints (consistent scenarios). Constraint programming “branch-and-prune” algorithms produce safe enclosures of all consistent scenarios. Previously proposed algorithms for probabilistic constraint reasoning compute the probability of sets of consistent scenarios, which requires calculating an integral over these sets (quadrature). In this work we propose to extend the “branch-and-prune” algorithms with Monte Carlo integration techniques to compute such probabilities. This approach can be useful in robotics for localization problems, where traditional approaches rely on probabilistic techniques that search for the most likely scenario, which may not satisfy the model constraints. We show how to apply our approach to cope with this problem and provide functionality in real time.
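To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of combining a branch-and-prune interval search with Monte Carlo integration: boxes proven inconsistent are discarded, boxes proven consistent contribute their full volume, and only the undecided boundary boxes are sampled. The single constraint x² + y² ≤ 1 and the uniform prior over the initial box are illustrative assumptions.

```python
# Minimal sketch: branch-and-prune over interval boxes plus Monte Carlo
# integration on the undecided boxes, for the illustrative constraint
# x^2 + y^2 <= 1 under a uniform prior on the initial box.
import random

def consistent_interval(box):
    """Return 'inside', 'outside', or 'unknown' for x^2 + y^2 <= 1,
    evaluated with simple interval arithmetic."""
    (xl, xu), (yl, yu) = box
    lo = (min(abs(xl), abs(xu)) ** 2 if xl * xu > 0 else 0.0)
    lo += (min(abs(yl), abs(yu)) ** 2 if yl * yu > 0 else 0.0)
    hi = max(abs(xl), abs(xu)) ** 2 + max(abs(yl), abs(yu)) ** 2
    if hi <= 1.0:
        return "inside"
    if lo > 1.0:
        return "outside"
    return "unknown"

def branch_and_prune(box, eps=0.05):
    """Recursively split the box, pruning inconsistent sub-boxes and
    returning a paving of boxes that may contain consistent scenarios."""
    status = consistent_interval(box)
    if status == "outside":
        return []
    widths = [u - l for l, u in box]
    if status == "inside" or max(widths) < eps:
        return [(box, status)]
    d = widths.index(max(widths))          # split the widest dimension
    l, u = box[d]
    mid = (l + u) / 2.0
    left, right = list(box), list(box)
    left[d], right[d] = (l, mid), (mid, u)
    return branch_and_prune(tuple(left), eps) + branch_and_prune(tuple(right), eps)

def probability(paving, initial_box, samples_per_box=200):
    """Exact volume of 'inside' boxes plus Monte Carlo estimates over the
    boundary boxes, normalised by the volume of the initial box."""
    def volume(b):
        v = 1.0
        for l, u in b:
            v *= (u - l)
        return v
    total = 0.0
    for box, status in paving:
        if status == "inside":
            total += volume(box)
        else:  # sample only inside the undecided boundary box
            hits = sum(
                1 for _ in range(samples_per_box)
                if sum(random.uniform(l, u) ** 2 for l, u in box) <= 1.0
            )
            total += volume(box) * hits / samples_per_box
    return total / volume(initial_box)

if __name__ == "__main__":
    box = ((-1.0, 1.0), (-1.0, 1.0))
    paving = branch_and_prune(box)
    print(probability(paving, box))   # ~ pi/4 under the uniform prior
```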
Abstract:
Soil slope instability concerning highway infrastructure is an ongoing problem in Iowa, as slope failures endanger public safety and continue to result in costly repair work. Characterization of slope failures is complicated, because the factors affecting slope stability can be difficult to discern and measure, particularly soil shear strength parameters. While extensive research on slope stability investigation and analysis has been conducted in the past, this research consists of field investigations addressing both the characterization and reinforcement of such slope failures. The current research focuses on applying an infrequently used in-situ testing technique, the Borehole Shear Test (BST), which rapidly provides effective (i.e., drained) shear strength parameter values of soil. Using the BST device, fifteen Iowa slopes (fourteen failures and one proposed slope) were investigated and documented. Particular attention was paid to highly weathered shale and glacial till soil deposits, which have both been associated with slope failures in the southern Iowa drift region. Conventional laboratory tests including direct shear tests, triaxial compression tests, and ring shear tests were also performed on undisturbed and reconstituted soil samples to supplement the BST results. The shear strength measurements were incorporated into complete evaluations of slope stability using both limit equilibrium and probabilistic analyses. The research methods and findings of these investigations are summarized in Volume 1 of this report. Research details of the independent characterization and reinforcement investigations are provided in Volumes 2 and 3, respectively. Combined, the field investigations offer guidance on identifying the factors that affect slope stability at a particular location and on designing slope reinforcement using pile elements for cases where remedial measures are necessary. The research findings are expected to benefit civil and geotechnical engineers of government transportation agencies, consultants, and contractors dealing with slope stability, slope remediation, and geotechnical testing in Iowa.
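As a hedged illustration of what a probabilistic slope stability evaluation looks like (not the analyses performed in the report), the sketch below applies the classical infinite-slope factor-of-safety expression and samples the drained strength parameters c′ and φ′, the quantities a BST provides, from assumed, purely illustrative distributions to estimate a probability of failure.

```python
# Minimal Monte Carlo sketch of a probabilistic factor-of-safety check
# for an infinite slope; all parameter values and distributions are
# hypothetical and illustrative, not taken from the report.
import math
import random

def factor_of_safety(c, phi_deg, gamma=19.0, depth=3.0, beta_deg=20.0):
    """Infinite-slope FS = (c' + gamma*z*cos^2(beta)*tan(phi')) /
    (gamma*z*sin(beta)*cos(beta)); units: kPa, kN/m^3, m, degrees."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

def probability_of_failure(n=100_000):
    """Sample c' and phi' from assumed normal distributions and count
    realisations with FS < 1."""
    failures = 0
    for _ in range(n):
        c = max(0.0, random.gauss(5.0, 2.0))      # cohesion, kPa (assumed)
        phi = max(1.0, random.gauss(22.0, 4.0))   # friction angle, deg (assumed)
        if factor_of_safety(c, phi) < 1.0:
            failures += 1
    return failures / n

if __name__ == "__main__":
    print(f"FS (mean parameters): {factor_of_safety(5.0, 22.0):.2f}")
    print(f"Probability of failure: {probability_of_failure():.3f}")
```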
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
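For readers unfamiliar with the idea, the following is a simplified, illustrative sketch of simulating a pair of coevolving nucleotide positions as a Markov process over the 16 pair states. It is not the Coev model itself: substitutions that move the pair into an assumed "profile" of co-varying states are simply given a higher rate than other substitutions.

```python
# Illustrative Gillespie-style simulation of one pair of coevolving
# nucleotide positions; profile and rates are assumptions, not the
# parameters of the Coev model.
import random

NUCS = "ACGT"
PROFILE = {"AT", "TA", "GC", "CG"}   # assumed co-varying pair profile
FAST, SLOW = 2.0, 0.2                # illustrative substitution rates

def neighbour_rates(pair):
    """All single-position substitutions of the pair and their rates."""
    rates = {}
    for pos in (0, 1):
        for n in NUCS:
            if n != pair[pos]:
                new = pair[:pos] + n + pair[pos + 1:]
                rates[new] = FAST if new in PROFILE else SLOW
    return rates

def simulate_pair(start="AT", branch_length=1.0):
    """Simulate the pair state along one branch of given length."""
    t, pair = 0.0, start
    while True:
        rates = neighbour_rates(pair)
        total = sum(rates.values())
        t += random.expovariate(total)        # waiting time to next event
        if t > branch_length:
            return pair
        u, acc = random.uniform(0, total), 0.0
        for state, rate in rates.items():      # choose the substitution
            acc += rate
            if u <= acc:
                pair = state
                break

if __name__ == "__main__":
    sims = [simulate_pair() for _ in range(10_000)]
    in_profile = sum(p in PROFILE for p in sims) / len(sims)
    print(f"Fraction of simulated pairs in the co-varying profile: {in_profile:.2f}")
```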
Abstract:
We present a new approach to model and classify breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and second, we use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texton and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact density-based tissue representation that is useful for mammogram classification. We show the results of tissue classification on the MIAS and DDSM datasets and compare our method with approaches that classified the same datasets, showing the better performance of our proposal.
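A minimal sketch of the generative component is shown below: pLSA fitted by EM on a document-by-word count matrix, as one might build from texton (visual-word) histograms of mammograms. The random count matrix is purely illustrative; in the approach described above, the per-image topic mixture P(z|d) would serve as the compact representation fed to a classifier.

```python
# Minimal pLSA via EM on a (documents x visual words) count matrix;
# the data are synthetic and stand in for texton histograms.
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """counts: (n_docs, n_words) count matrix. Returns P(z|d) and P(w|z)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))            # P(z|d)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))           # P(w|z)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every (d, w) cell
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]           # (d, z, w)
        p_z_dw = joint / joint.sum(axis=1, keepdims=True).clip(1e-12)
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        expected = counts[:, None, :] * p_z_dw                   # (d, z, w)
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True).clip(1e-12)
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True).clip(1e-12)
    return p_z_d, p_w_z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    counts = rng.poisson(3.0, size=(40, 200)).astype(float)  # 40 images, 200 textons
    p_z_d, p_w_z = plsa(counts, n_topics=4)
    print(p_z_d.shape, p_w_z.shape)   # topic mixtures could feed a classifier
```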
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network, whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread, as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task, as graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behaviour of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exhibited functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
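To illustrate the kind of network measures that can distinguish graphs produced by different models, here is a minimal sketch using networkx. The Barabási-Albert and Watts-Strogatz generators merely stand in for the graph models compared in the thesis; the specific measures chosen are an assumption for illustration.

```python
# Compare graphs from two well-known generative models using a few
# common network measures (illustrative only).
import networkx as nx

def measures(G):
    """A few common properties used to compare complex networks."""
    degrees = [d for _, d in G.degree()]
    return {
        "avg_degree": sum(degrees) / len(degrees),
        "avg_clustering": nx.average_clustering(G),
        "assortativity": nx.degree_assortativity_coefficient(G),
        "avg_path_length": nx.average_shortest_path_length(G),
    }

if __name__ == "__main__":
    ba = nx.barabasi_albert_graph(500, 3, seed=42)                 # preferential attachment
    ws = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=42)   # small-world rewiring
    for name, g in [("Barabasi-Albert", ba), ("Watts-Strogatz", ws)]:
        print(name, measures(g))
```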
Abstract:
Some investigations on the spectral and statistical characteristics of deep water waves are available for Indian waters, but practically no systematic investigation of the shallow water wave spectral and probabilistic characteristics has been made for any part of the Indian coast, except for a few restricted studies. Hence a comprehensive study of the shallow water wave climate and its spectral and statistical characteristics at a location (Alleppey) along the southwest coast of India is undertaken based on recorded data. The results of the investigation are presented in this thesis, which comprises seven chapters.
Abstract:
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
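Probabilistic principal components analysis, one ingredient of the prior described above, has a closed-form maximum-likelihood solution (Tipping & Bishop). The sketch below shows that fit and the resulting Gaussian density; the random low-rank data stand in for stacked silhouette contour vectors, and this is an illustration of the component technique rather than the authors' full mixture model.

```python
# Minimal probabilistic PCA (closed-form ML solution) and its Gaussian
# log-likelihood; the synthetic data are illustrative only.
import numpy as np

def ppca_fit(X, q):
    """X: (n, d) data matrix; q: latent dimension.
    Returns mean, loading matrix W, and noise variance sigma2."""
    mu = X.mean(axis=0)
    Xc = X - mu
    S = Xc.T @ Xc / X.shape[0]                  # sample covariance (d, d)
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]            # descending eigenvalues
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    sigma2 = eigvals[q:].mean()                  # discarded variance -> noise
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return mu, W, sigma2

def ppca_log_likelihood(X, mu, W, sigma2):
    """Gaussian log-likelihood under covariance C = W W^T + sigma2 I."""
    d = X.shape[1]
    C = W @ W.T + sigma2 * np.eye(d)
    Xc = X - mu
    _, logdet = np.linalg.slogdet(C)
    mahal = np.einsum("ij,jk,ik->i", Xc, np.linalg.inv(C), Xc)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + mahal)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 20))   # low-rank data
    X += 0.1 * rng.normal(size=X.shape)                         # plus noise
    mu, W, sigma2 = ppca_fit(X, q=2)
    print(sigma2, ppca_log_likelihood(X[:5], mu, W, sigma2))
```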
Abstract:
Background: This study describes a bioinformatics approach designed to identify Plasmodium vivax proteins potentially involved in reticulocyte invasion. Specifically, different protein training sets were built and tuned based on different biological parameters, such as experimental evidence of secretion and/or involvement in invasion-related processes. A profile-based sequence method supported by hidden Markov models (HMMs) was then used to build classifiers to search for biologically-related proteins. The transcriptional profile of the P. vivax intra-erythrocyte developmental cycle was then screened using these classifiers. Results: A bioinformatics methodology for identifying potentially secreted P. vivax proteins was designed using sequence redundancy reduction and probabilistic profiles. This methodology led to identifying a set of 45 proteins that are potentially secreted during the P. vivax intra-erythrocyte development cycle and could be involved in cell invasion. Thirteen of the 45 proteins have already been described as vaccine candidates; there is experimental evidence of protein expression for 7 of the 32 remaining ones, while no previous studies of expression, function or immunology have been carried out for the additional 25. Conclusions: The results support the idea that probabilistic techniques like profile HMMs improve similarity searches. Also, different adjustments such as sequence redundancy reduction using Pisces or Cd-Hit allowed data clustering based on rational reproducible measurements. This kind of approach for selecting proteins with specific functions is highly important for supporting large-scale analyses that could aid in the identification of genes encoding potential new target antigens for vaccine development and drug design. The present study has led to targeting 32 proteins for further testing regarding their ability to induce protective immune responses against P. vivax malaria.
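The core computation by which an HMM-based profile assigns a likelihood to a candidate sequence is the forward algorithm; a minimal sketch is shown below. The tiny two-state model over a reduced residue alphabet is an assumption for illustration and is not one of the profile HMMs trained in the study.

```python
# Minimal forward algorithm in log space for a toy two-state HMM
# (illustrative of how HMM profiles score similarity).
import math

STATES = ("conserved", "variable")
START = {"conserved": 0.5, "variable": 0.5}
TRANS = {
    "conserved": {"conserved": 0.9, "variable": 0.1},
    "variable": {"conserved": 0.2, "variable": 0.8},
}
# Emissions over a reduced alphabet: hydrophobic H, polar P, other O
EMIT = {
    "conserved": {"H": 0.7, "P": 0.2, "O": 0.1},
    "variable": {"H": 0.3, "P": 0.4, "O": 0.3},
}

def _logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_likelihood(sequence):
    """Forward algorithm: log P(sequence | model)."""
    alpha = {s: math.log(START[s]) + math.log(EMIT[s][sequence[0]]) for s in STATES}
    for symbol in sequence[1:]:
        alpha = {
            s: math.log(EMIT[s][symbol]) + _logsumexp(
                [alpha[p] + math.log(TRANS[p][s]) for p in STATES]
            )
            for s in STATES
        }
    return _logsumexp(list(alpha.values()))

if __name__ == "__main__":
    print(log_likelihood("HHHPHHOH"))   # higher scores suggest a better match
```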
Abstract:
The Chartered Institution of Building Services Engineers (CIBSE) produced a technical memorandum (TM36) presenting research on the impact of future climate on building energy use and thermal comfort. One climate projection for each of four CO2 emissions scenarios was used in TM36, providing a deterministic outlook. As part of the UK Climate Impacts Programme (UKCIP), probabilistic climate projections are being studied in relation to building energy simulation techniques. Including uncertainty in climate projections is considered an important advance in climate impacts modelling and is included in the latest UKCIP data (UKCP09). Incorporating the stochastic nature of these new climate projections in building energy modelling requires a significant increase in data handling and careful statistical interpretation of the results to provide meaningful conclusions. This paper compares the results from building energy simulations when applying deterministic and probabilistic climate data. This is based on two case study buildings: (i) a mixed-mode office building with exposed thermal mass and (ii) a mechanically ventilated, lightweight office building. Building (i) represents an energy-efficient building design that provides passive and active measures to maintain thermal comfort. Building (ii) relies entirely on mechanical means for heating and cooling, with its lightweight construction raising concern over increased cooling loads in a warmer climate. Devising an effective probabilistic approach highlighted greater uncertainty in predicting building performance, depending on the type of building modelled and the performance factors under consideration. Results indicate that the range of calculated quantities depends not only on the building type but also, strongly, on the performance parameters of interest. Uncertainty is likely to be particularly marked with regard to thermal comfort in naturally ventilated buildings.
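The data-handling pattern involved is essentially running the same building-performance calculation over an ensemble of climate realisations and reporting a distribution rather than a single number. The sketch below is a hedged illustration of that pattern only: the degree-hours proxy stands in for a full dynamic building simulation, and the synthetic temperature ensemble stands in for UKCP09 projections.

```python
# Minimal sketch: evaluate a performance metric over an ensemble of
# climate realisations and summarise the resulting distribution.
import numpy as np

def overheating_degree_hours(hourly_temp, threshold=25.0):
    """Proxy metric: sum of (T - threshold) over hours above the threshold."""
    exceed = hourly_temp - threshold
    return float(exceed[exceed > 0].sum())

def synthetic_ensemble(n_members=100, warming_spread=2.0, seed=0):
    """Illustrative ensemble of annual hourly temperature series, each
    member shifted by a different sampled warming offset."""
    rng = np.random.default_rng(seed)
    hours = np.arange(8760)
    base = (12.0 + 8.0 * np.sin(2 * np.pi * hours / 8760)
                 + 4.0 * np.sin(2 * np.pi * hours / 24))
    offsets = rng.normal(loc=1.5, scale=warming_spread / 2, size=n_members)
    return [base + off + rng.normal(0, 1.5, size=hours.size) for off in offsets]

if __name__ == "__main__":
    results = np.array([overheating_degree_hours(m) for m in synthetic_ensemble()])
    p10, p50, p90 = np.percentile(results, [10, 50, 90])
    print(f"Overheating degree-hours: median {p50:.0f} (10th-90th: {p10:.0f}-{p90:.0f})")
```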
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill score) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
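The forecasting recipe described can be illustrated in a few lines: a multiple linear regression on the CO2-equivalent concentration and a large-scale mode of variability, issued as a Gaussian probabilistic forecast and scored with the continuous ranked probability score (CRPS). The data below are synthetic and the predictors illustrative; the real system uses observed predictors and hindcasts for 1961-2013.

```python
# Minimal sketch of a regression-based probabilistic seasonal forecast
# scored with the CRPS; all data are synthetic and illustrative.
import numpy as np

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against y."""
    from math import erf, exp, pi, sqrt
    z = (y - mu) / sigma
    pdf = exp(-0.5 * z * z) / sqrt(2 * pi)
    cdf = 0.5 * (1 + erf(z / sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / sqrt(pi))

rng = np.random.default_rng(0)
years = np.arange(1961, 2014)
co2 = 320 + 1.8 * (years - 1961)                     # trend predictor (illustrative)
enso = rng.normal(size=years.size)                    # ENSO-like index (illustrative)
temp = 0.01 * (co2 - co2.mean()) + 0.3 * enso + rng.normal(0, 0.2, years.size)

X = np.column_stack([np.ones_like(co2), co2, enso])   # design matrix with intercept
train = years < 2000                                  # fit on the early period
beta, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)
resid_sd = np.std(temp[train] - X[train] @ beta, ddof=X.shape[1])

# Probabilistic hindcasts for the held-out years, scored against "observations"
mu = X[~train] @ beta
crps_fcst = np.mean([crps_gaussian(y, m, resid_sd) for y, m in zip(temp[~train], mu)])
crps_clim = np.mean([crps_gaussian(y, temp[train].mean(), temp[train].std())
                     for y in temp[~train]])
print(f"CRPSS vs climatology: {1 - crps_fcst / crps_clim:.2f}")
```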
Abstract:
Integer carrier phase ambiguity resolution is the key to rapid and high-precision global navigation satellite system (GNSS) positioning and navigation. As important as the integer ambiguity estimation is the validation of the solution because, even when one uses an optimal or close-to-optimal integer ambiguity estimator, an unacceptable integer solution can still be obtained. This can happen, for example, when the data are degraded by multipath effects, which affect the real-valued float ambiguity solution, leading to an incorrect integer (fixed) ambiguity solution. Thus, it is important to use a statistical test with a sound theoretical and probabilistic basis, which has become possible with the Ratio Test Integer Aperture (RTIA) estimator. The properties and underlying concept of this statistical test are briefly described. An experiment was performed using data with and without multipath: reflective objects were placed around the receiver antenna to induce multipath. A method based on multiresolution analysis by wavelet transform is used to reduce the multipath in the GPS double-difference (DD) observations. The objective of this paper is therefore to compare ambiguity resolution and validation in these two situations: data with multipath and data with multipath reduced by wavelets. Additionally, the accuracy of the estimated coordinates is assessed by comparison with ground-truth coordinates, which were estimated using data without multipath effects. The success and failure probabilities of the RTIA were, in general, consistent and demonstrated the efficiency and reliability of this statistical test. After multipath mitigation, ambiguity resolution becomes more reliable and the coordinates more precise.
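The acceptance step behind the ratio test can be sketched as follows: the squared norms of the best and second-best integer candidates, measured in the metric of the float-solution covariance, are compared, and the fixed solution is accepted only if the runner-up is sufficiently worse. This is a simplified illustration: operational systems obtain the candidates with the LAMBDA method rather than the small brute-force search used here, and the RTIA derives its critical value from a fixed failure-rate requirement instead of using a constant; the numerical values are invented for the example.

```python
# Simplified ratio-test sketch for integer ambiguity validation
# (not the RTIA implementation; candidates found by brute force).
import itertools
import numpy as np

def integer_candidates(a_float, Qa, radius=1):
    """Enumerate integer vectors around the rounded float ambiguities,
    ranked by (a - z)^T Qa^{-1} (a - z)."""
    Qinv = np.linalg.inv(Qa)
    centre = np.round(a_float).astype(int)
    cands = []
    for offset in itertools.product(range(-radius, radius + 1), repeat=len(a_float)):
        z = centre + np.array(offset)
        d = a_float - z
        cands.append((float(d @ Qinv @ d), z))
    cands.sort(key=lambda t: t[0])
    return cands

def ratio_test(a_float, Qa, critical=2.0):
    """Accept the best integer candidate only if it beats the runner-up
    by the critical ratio; otherwise keep the float solution."""
    (q1, z1), (q2, _) = integer_candidates(a_float, Qa)[:2]
    return (z1, True) if q2 / q1 >= critical else (a_float, False)

if __name__ == "__main__":
    a_float = np.array([3.12, -1.87, 0.43])              # float ambiguities (cycles)
    Qa = np.array([[0.090, 0.020, 0.010],
                   [0.020, 0.070, 0.015],
                   [0.010, 0.015, 0.080]])                # float covariance (invented)
    fixed, accepted = ratio_test(a_float, Qa)
    print("fixed:", fixed, "accepted:", accepted)
```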
Abstract:
OBJECTIVE: To evaluate effects of racemic ketamine and S-ketamine in gazelles. ANIMALS: 21 male gazelles (10 Rheem gazelles [Gazella subgutturosa marica] and 11 Subgutturosa gazelles [Gazella subgutturosa subgutturosa]), 6 to 67 months old and weighing (mean ± SD) 19 ± 3 kg. PROCEDURES: In a randomized, blinded crossover study, a combination of medetomidine (80 µg/kg) with racemic ketamine (5 mg/kg) or S-ketamine (3 mg/kg) was administered IM. Heart rate, blood pressure, respiratory rate, rectal temperature, and oxygen saturation (determined by means of pulse oximetry) were measured. An evaluator timed and scored induction of, maintenance of, and recovery from anesthesia. Medetomidine was reversed with atipamezole. The alternate combination was used after a 4-day interval. Comparisons between groups were performed with Wilcoxon signed rank and paired t tests. RESULTS: Anesthesia induction was poor in 2 gazelles receiving S-ketamine, but other phases of anesthesia were uneventful. A dominant male required an additional dose of S-ketamine (0.75 mg/kg, IM). After administration of atipamezole, gazelles were uncoordinated for a significantly shorter period with S-ketamine than with racemic ketamine. Recovery quality was poor in 3 gazelles with racemic ketamine. No significant differences between treatments were found for any other variables. Time from drug administration to antagonism was similar between racemic ketamine (44.5 to 53.0 minutes) and S-ketamine (44.0 to 50.0 minutes). CONCLUSIONS AND CLINICAL RELEVANCE: Administration of S-ketamine at a dose 60% that of racemic ketamine resulted in poorer induction of anesthesia, an analogous degree of sedation, and better recovery from anesthesia in gazelles, with unremarkable alterations in physiologic variables, compared with racemic ketamine.