934 results for PROBABILISTIC TELEPORTATION
Abstract:
Composite materials are very useful in structural engineering, particularly in weight-sensitive applications. Two different test models of the same structure made from composite materials can display very different dynamic behavior due to large uncertainties associated with composite material properties. Composite structures can also suffer from pre-existing imperfections, such as delaminations, voids, or cracks introduced during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problems in damage assessment. A recently developed C0 shear-deformable, locking-free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with a sliding window defuzzifier is used for delamination damage detection in composite plate-type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in input data. (C) 2016 Elsevier Ltd. All rights reserved.
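As a toy illustration of the Monte Carlo ingredient described above, the following sketch (not the authors' FLS; the single-mode frequency model, variance level, and membership function are all invented for illustration) propagates randomness in a material property through a hypothetical modal-frequency model and applies a fuzzy-style membership to the resulting frequency shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

def modal_frequency(E, delamination=0.0):
    """Hypothetical first modal frequency of a plate (Hz): stiffness scales
    with Young's modulus E; delamination (0..1) softens the plate."""
    return 100.0 * np.sqrt(E / 70e9) * (1.0 - 0.3 * delamination)

# Monte Carlo Simulation: randomness in material properties (here, 15% CoV in E).
E_samples = rng.normal(70e9, 0.15 * 70e9, size=10_000)
f_healthy = modal_frequency(E_samples)                    # pristine plates
f_damaged = modal_frequency(E_samples, delamination=0.4)  # delaminated plates

def damaged_membership(f, f_ref=100.0, lo=0.02, hi=0.10):
    """Crude fuzzy-style membership: degree to which a measured frequency
    drop 'looks damaged', using a smooth ramp instead of a hard threshold."""
    shift = np.clip((f_ref - f) / f_ref, 0.0, None)  # relative frequency drop
    return np.clip((shift - lo) / (hi - lo), 0.0, 1.0)

print("mean membership, healthy:", damaged_membership(f_healthy).mean())
print("mean membership, damaged:", damaged_membership(f_damaged).mean())
```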
Abstract:
Mechanistic determinants of bacterial growth, death, and spread within mammalian hosts are currently poorly understood and cannot be fully resolved by studying a single bacterial population. Here, we report on the application of sophisticated experimental approaches to map the spatiotemporal population dynamics of bacteria during an infection. We analyzed heterogeneous traits of simultaneous infections with tagged Salmonella enterica populations (wild-type isogenic tagged strains [WITS]) in wild-type and gene-targeted mice. WITS are phenotypically identical but can be distinguished and enumerated by quantitative PCR, making it possible, using probabilistic models, to estimate the bacterial death rate based on the disappearance of strains through time. This multidisciplinary approach allowed us to establish the timing, relative occurrence, and immune control of key infection parameters in a true host-pathogen combination. Our analyses support a model in which, shortly after infection, concomitant death and rapid bacterial replication lead to the establishment of independent bacterial subpopulations in different organs, a process controlled by host antimicrobial mechanisms. Later, decreased microbial mortality leads to an exponential increase in the number of bacteria that spread locally, with subsequent mixing of bacteria between organs via bacteraemia and further stochastic selection. This approach provides us with an unprecedented outlook on the pathogenesis of S. enterica infections, illustrating the complex spatial and stochastic effects that drive an infectious disease. The application of the novel method that we present in appropriate and diverse host-pathogen combinations, together with modelling of the resulting data, will facilitate a comprehensive view of the spatial and stochastic nature of within-host dynamics. © 2008 Grant et al.
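The death-rate idea can be illustrated with a deliberately simplified model. Assuming each tagged founding lineage independently survives an early pure-death phase with probability exp(-λt) (far simpler than the birth-death models actually fitted to such data; all numbers below are hypothetical), the fraction of WITS still detected yields a method-of-moments estimate of λ:

```python
import numpy as np

# Toy version of the WITS idea: n equally-abundant tagged strains are
# inoculated; each founding lineage independently survives an early
# pure-death phase with probability exp(-lam * t). This is a strong
# simplifying assumption -- the published models are richer.
rng = np.random.default_rng(1)
n_strains, true_lam, t = 8, 0.5, 2.0   # hypothetical numbers

def simulate_detected(n, lam, t, n_mice=200):
    """Number of strains still detected (by qPCR) per mouse at time t."""
    return rng.binomial(n, np.exp(-lam * t), size=n_mice)

detected = simulate_detected(n_strains, true_lam, t)
p_hat = detected.mean() / n_strains    # pooled survival fraction
lam_hat = -np.log(p_hat) / t           # method-of-moments death rate
print(f"true lambda={true_lam}, estimated lambda={lam_hat:.3f}")
```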
Abstract:
Many probabilistic models introduce strong dependencies between variables using a latent multivariate Gaussian distribution or a Gaussian process. We present a new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors. Its key properties are: 1) it has simple, generic code applicable to many models, 2) it has no free parameters, 3) it works well for a variety of Gaussian process based models. These properties make our method ideal for use while model building, removing the need to spend time deriving and tuning updates for more complex algorithms.
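The abstract does not spell out the algorithm; the sketch below implements the well-known elliptical slice sampling update, which matches the listed properties (simple generic code, no free parameters) — identifying it with this paper's method is our assumption, and the toy likelihood is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def elliptical_slice_step(f, chol_Sigma, log_lik):
    """One parameter-free MCMC update for a latent vector with N(0, Sigma)
    prior: propose on the ellipse through the current state f and a fresh
    prior draw nu, shrinking the angle bracket until a point is accepted."""
    nu = chol_Sigma @ rng.standard_normal(f.shape)  # prior draw
    log_y = log_lik(f) + np.log(rng.uniform())      # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        if theta < 0.0:                             # shrink the bracket
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy usage: correlated Gaussian prior, Gaussian likelihood around data y.
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
L = np.linalg.cholesky(Sigma)
y = np.array([1.0, -0.5])
log_lik = lambda f: -0.5 * np.sum((y - f) ** 2) / 0.1
f = np.zeros(2)
samples = []
for _ in range(2000):
    f = elliptical_slice_step(f, L, log_lik)
    samples.append(f)
print("posterior mean ~", np.mean(samples, axis=0))
```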
Abstract:
Statistical model-based methods are presented for the reconstruction of autocorrelated signals in impulsive plus continuous noise environments. Signals are modelled as autoregressive and noise sources as discrete and continuous mixtures of Gaussians, allowing for robustness in highly impulsive and non-Gaussian environments. Markov Chain Monte Carlo methods are used for reconstruction of the corrupted waveforms within a Bayesian probabilistic framework and results are presented for contaminated voice and audio signals.
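A generative sketch of the noise model described (an autoregressive signal observed in a two-component Gaussian mixture: continuous background noise plus rare, high-variance impulses) may make the setting concrete. All coefficients below are invented, and the Bayesian MCMC reconstruction itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(2) signal (stable coefficients, invented for illustration).
n, a1, a2 = 500, 1.5, -0.7
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.normal(0.0, 0.1)

# Two-component Gaussian mixture noise: a continuous low-variance
# component everywhere, plus rare high-variance impulses.
p_impulse = 0.02
impulses = rng.uniform(size=n) < p_impulse
noise = rng.normal(0.0, 0.05, size=n)
noise[impulses] = rng.normal(0.0, 2.0, size=impulses.sum())
y = x + noise                           # observed, corrupted waveform
print(f"{impulses.sum()} impulses over {n} samples")
```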
Abstract:
This study aimed to analyze the relationship of optimism with anxiety, depression, and global quality of life in a sample of 45 oncology patients, 34% men and 66% women, with a mean age of 50.5 years and a range of 21 to 74 years. A mixed sample was formed according to the location of the neoplasm. For this work, non-probabilistic sampling was used, controlling for the following variables: disease stage, phase and type of medical/clinical treatment, attending medical team, and concomitant diseases. The study complied with established ethical standards (FePRA, 2013; Helsinki, 2008). Optimism was assessed with the Spanish version of the LOT-R (Otero, Luengo, Romero & Castro, 1998); quality of life with the FACT-G questionnaire, 4th version (Cella, Tulsky & Gray, 1993); and anxiety and depression with the Spanish version of the HAD scale (Caro & Ibáñez, 1992). Significant correlations were found between optimism and each of the variables under study, and significant differences were reported between pessimists and optimists in Anxiety (F = 6.35, p = .015), Depression (F = 5.30, p = .026), and Quality of Life (F = 8.99, p = .004).
Abstract:
This paper compares parallel and distributed implementations of an iterative, Gibbs-sampling machine learning algorithm. Distributed implementations run under Hadoop on facility computing clouds. The probabilistic model under study is the infinite HMM [1], whose parameters are learnt using an instance-blocked Gibbs sampling scheme, with a step consisting of a dynamic program. We apply this model to learn part-of-speech tags from newswire text in an unsupervised fashion. Our focus here, however, is on runtime performance rather than NLP-relevant scores: iteration duration, ease of development, deployment, and debugging. © 2010 IEEE.
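For a finite HMM, the dynamic program inside each blocked Gibbs step is the forward-filtering backward-sampling recursion sketched below (the infinite HMM requires a truncation or beam before this applies; the toy emission model is our own):

```python
import numpy as np

rng = np.random.default_rng(4)

def ffbs(pi0, A, log_obs):
    """Forward-filter backward-sample one state sequence for a finite HMM.
    pi0: initial distribution (K,); A: transition matrix (K, K);
    log_obs: per-step observation log-likelihoods (T, K)."""
    T, K = log_obs.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi0 * np.exp(log_obs[0])
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                    # forward filtering
        alpha[t] = (alpha[t-1] @ A) * np.exp(log_obs[t])
        alpha[t] /= alpha[t].sum()
    z = np.empty(T, dtype=int)               # backward sampling
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * A[:, z[t+1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z

# Toy usage: 2 hidden states with Gaussian emissions at means -1 and +1.
A = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = rng.normal([-1.0] * 50 + [1.0] * 50, 0.5)
log_obs = np.stack([-0.5 * (obs - m) ** 2 for m in (-1.0, 1.0)], axis=1)
z = ffbs(np.array([0.5, 0.5]), A, log_obs)
print(z)
```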
Abstract:
Michael Behe and William Dembski are two of the leaders of Intelligent Design Theory, a proposal that emerged in response to the evolutionist and anti-finalist models prevailing in certain academic and intellectual circles, especially in the English-speaking world. Behe's speculations rest on the concept of an "irreducibly complex system", understood as an ordered set of parts whose functionality depends strictly on its structural integrity, and whose origin is therefore resistant to gradualist explanations. According to Behe, such systems are present in living beings, which would allow one to infer that they are not the product of blind, random mechanisms but the result of design. Dembski, for his part, has approached the problem from a more quantitative perspective, developing a probabilistic algorithm known as the "explanatory filter", which, according to the author, would make it possible to infer scientifically the presence of design in both artificial and natural entities. Moving beyond the dismissals of neo-Darwinism, we examine these authors' proposal from the philosophical foundations of the Thomist school. In our view, their work contains some valuable intuitions, which nevertheless tend to go unnoticed because of the scant formality with which they are presented and because of the eminently mechanistic, artifact-centered approach both take to the question. It is precisely at making those intuitions explicit that this article aims.
Abstract:
Cell adhesion, mediated by specific receptor-ligand interactions, plays an important role in biological processes such as tumor metastasis and the inflammatory cascade. For example, interactions between β2-integrin (lymphocyte function-associated antigen-1 and/or Mac-1) on polymorphonuclear neutrophils (PMNs) and ICAM-1 on melanoma cells initiate the binding of melanoma cells to PMNs within the tumor microenvironment in blood flow, which in turn activates PMN-melanoma cell aggregation in a near-wall region of the vascular endothelium, thereby enhancing subsequent extravasation of melanoma cells in the microcirculation. The kinetics of integrin-ligand binding in shear flow determines this process but has not been well understood. In the present study, interactions of PMNs with WM9 melanoma cells were investigated to quantify the kinetics of β2-integrin-ICAM-1 binding, using a cone-plate viscometer that generates a linear shear flow combined with a two-color flow cytometry technique. Aggregation fractions exhibited a transition phase, first increasing up to 60 s and then decreasing with shear duration. Melanoma-PMN aggregation was also found to be inversely correlated with the shear rate. A previously developed probabilistic model was modified to predict the time dependence of aggregation fractions at different shear rates and medium viscosities. Kinetic parameters of β2-integrin-ICAM-1 binding were obtained by individual or global fittings and were comparable to their respective published values. These findings provide new quantitative understanding of the biophysical basis of leukocyte-tumor cell interactions mediated by specific receptor-ligand interactions under shear flow conditions.
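As a rough illustration of fitting kinetic parameters to aggregation fractions, consider a minimal forward/reverse kinetic model. This is far simpler than the paper's probabilistic model (in particular, it is monotone in time and cannot reproduce the observed rise-then-fall transition), and all rates and shear values are invented:

```python
import numpy as np

def aggregation_fraction(t, shear_rate, kf0=0.05, kr=0.01):
    """Aggregated fraction F(t) from dF/dt = kf*(1 - F) - kr*F, F(0) = 0,
    solved analytically: F(t) = (kf/k) * (1 - exp(-k*t)), k = kf + kr.
    kf is crudely made inversely proportional to shear rate to mimic the
    reported inverse correlation with shear."""
    kf = kf0 * (100.0 / shear_rate)
    k = kf + kr
    return (kf / k) * (1.0 - np.exp(-k * t))

for g in (62.5, 125.0, 250.0):   # shear rates (1/s), hypothetical
    print(g, aggregation_fraction(np.array([30.0, 60.0, 120.0]), g))
```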
Abstract:
This article addresses the problem of causality in the economic sciences. Starting from the distinction between hard and soft sciences, and from the supposed classification of economics in the latter group, it analyzes the different ontological paradigms that underpin scientific research, and then shifts causality from the ontological to the gnoseological domain. It then examines the combination of the hypothetico-deductive methodology with the correlational method, which generates a probabilistic causality. This causality, contextualized scientifically, allows predictive inferences to be made and tested, through which the economic sciences can find their place at the most rigorous levels of scientific research.
Abstract:
Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the new generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain in efficiency. Other methods like Gibbs sampling use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs’ convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
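For intuition about computing most probable configurations, the sketch below runs the max-product (Viterbi-style) dynamic program on a chain-structured Markov network; the paper's methods target more general structures, and the potentials here are invented:

```python
import numpy as np

def most_probable_configuration(unary, pair):
    """Most probable configuration of a chain-structured Markov network.
    unary: (n, k) log-potentials per variable; pair: (k, k) shared
    log-potential for each pair of adjacent variables."""
    n, k = unary.shape
    msg = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):                    # max-product forward pass
        scores = msg[i-1][:, None] + pair + unary[i][None, :]
        back[i] = scores.argmax(axis=0)
        msg[i] = scores.max(axis=0)
    x = np.empty(n, dtype=int)               # backtrack
    x[-1] = msg[-1].argmax()
    for i in range(n - 1, 0, -1):
        x[i-1] = back[i][x[i]]
    return x

# Toy usage: 6 binary variables that prefer agreeing with their neighbours.
unary = np.log(np.array([[0.9, 0.1]] * 3 + [[0.2, 0.8]] * 3))
pair = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
print(most_probable_configuration(unary, pair))
```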
Abstract:
As defined, the modeling procedure is quite broad. For example, the chosen compartments may contain a single organism, a population of organisms, or an ensemble of populations. A population compartment, in turn, could be homogeneous or possess structure in size or age. Likewise, the mathematical statements may be deterministic or probabilistic in nature, linear or nonlinear, autonomous or able to possess memory. Examples of all types appear in the literature. In practice, however, ecosystem modelers have focused upon particular types of model constructions. Most analyses treat compartments which are nonsegregated (populations or trophic levels) and homogeneous. The accompanying mathematics is, for the most part, deterministic and autonomous. Despite the enormous effort which has gone into such ecosystem modeling, there remains a paucity of models that meet the rigorous validation criteria which might be applied to a model of a mechanical system. Most ecosystem models are short on predictive ability. Even some classical examples, such as the Lotka-Volterra predator-prey scheme, have not spawned validated examples.
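The Lotka-Volterra scheme mentioned above is exactly the kind of two-compartment, deterministic, autonomous construction the text describes. A minimal numerical sketch (rate constants chosen purely for illustration):

```python
# Euler integration of the classical Lotka-Volterra predator-prey model:
#   dx/dt = a*x - b*x*y      (prey)
#   dy/dt = c*b*x*y - d*y    (predators)
a, b, c, d = 1.0, 0.1, 0.5, 0.5     # illustrative rate constants
x, y, dt = 10.0, 5.0, 0.001          # prey, predators, step size
for step in range(200_000):
    dx = (a * x - b * x * y) * dt
    dy = (c * b * x * y - d * y) * dt
    x, y = x + dx, y + dy
    if step % 50_000 == 0:
        print(f"t={step*dt:6.1f}  prey={x:7.2f}  predators={y:6.2f}")
```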
Abstract:
Introduction: The National Oceanic and Atmospheric Administration's Biogeography Branch has conducted surveys of reef fish in the Caribbean since 1999. Surveys were initially undertaken to identify essential fish habitat, but were later used to characterize and monitor reef fish populations and benthic communities over time. The Branch's goals are to develop knowledge and products on the distribution and ecology of living marine resources and to provide resource managers, scientists, and the public with an improved ecosystem basis for making decisions. The Biogeography Branch monitors reef fishes and benthic communities in three study areas: (1) St. John, USVI, (2) Buck Island, St. Croix, USVI, and (3) La Parguera, Puerto Rico. In addition, the Branch has characterized the reef fish and benthic communities in the Flower Garden Banks National Marine Sanctuary, Gray's Reef National Marine Sanctuary, and around the island of Vieques, Puerto Rico. Reef fish data are collected using a stratified random sampling design and stringent measurement protocols. Over time, the sampling design has changed in order to meet different management objectives (i.e. identification of essential fish habitat vs. monitoring), but the designs have always remained:
• Probabilistic – to allow inferences to a larger target population,
• Objective – to satisfy management objectives, and
• Stratified – to reduce sampling costs and obtain population estimates for strata.
Two aspects of the sampling design are now under consideration and are the focus of this report: first, the application of a sample frame, identified as a set of points or grid elements from which a sample is selected; and second, the application of subsampling in a two-stage sampling design. To evaluate these considerations, the pros and cons of implementing a sampling frame and subsampling are discussed. Particular attention is paid to the impact of each design on accuracy (bias), feasibility, and sampling cost (precision). Further, this report presents an analysis of data to determine the optimal number of subsamples to collect if subsampling were used, in the spirit of the textbook tradeoff sketched below.
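A minimal sketch of the classical two-stage cost-variance tradeoff (a Cochran-style rule of thumb; whether the report uses this exact rule is an assumption, and all costs and variance components below are hypothetical):

```python
import math

def optimal_subsamples(c1, c2, s2_between, s2_within):
    """Textbook two-stage sampling rule of thumb: the number of second-stage
    subsamples per primary unit that minimises variance for a fixed budget,
    m_opt = sqrt((c1 / c2) * (s2_within / s2_between)), where c1 is the cost
    of reaching a primary unit (e.g. a grid element), c2 the cost of one
    subsample, and s2_* are the between/within variance components."""
    return math.sqrt((c1 / c2) * (s2_within / s2_between))

# Hypothetical numbers: reaching a site costs 10x one subsample, and
# within-site variance is twice the between-site variance.
print(round(optimal_subsamples(c1=10.0, c2=1.0, s2_between=1.0, s2_within=2.0)))
```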