942 results for Stochastic simulation algorithm


Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Materials Science and Technology - FC

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Agronomy (Irrigation and Drainage) - FCA

Relevance:

80.00%

Publisher:

Abstract:

In the first part of this work, the binding behaviour of annexin A1 and annexin A2t to solid-supported lipid membranes composed of POPC and POPS was investigated. For both proteins, fluorescence microscopy showed that irreversible binding occurs only in the presence of POPS. Atomic force microscopy revealed the lateral organization of the annexins on the lipid membrane. Both proteins assemble on the surface in the form of lateral aggregates (two-dimensional domains); moreover, the surface coverage and the size of the domains depend on the membrane composition and the calcium concentration. With increasing POPS content and calcium concentration, the surface coverage increases and the mean domain radius decreases. Combined with detailed quartz crystal microbalance (QCM) binding studies of annexin A1, these results were used to develop a binding model based on a heterogeneous surface: reversible adsorption takes place on a POPC-rich matrix, and irreversible adsorption on POPS-rich domains. By fitting dynamic Monte Carlo simulations based on two-dimensional random sequential adsorption (RSA), insights were gained into the membrane structure and the kinetic rate constants as functions of the calcium concentration and the incubation time of the protein. Irreversible binding is faster than reversible binding over the entire range of calcium concentrations, and irreversible adsorption shows a markedly stronger dependence on the calcium concentration. A lower surface coverage at low Ca2+ levels is mainly explained by the decrease in available binding sites on the surface. The good agreement between the domain structures obtained from the Monte Carlo simulations and the atomic force microscopy images, and the fact that the simulated resonance frequency curves could be fitted to the experimental QCM curves without difficulty, demonstrate that the developed simulation program is well suited to describing the adsorption of annexin A1. Extracting the kinetic parameters from the two-dimensional RSA model is clearly superior to a simple Langmuir approach: a Langmuir model captures only a single macroscopic rate constant in an integral fashion, whereas the RSA model allows the reversible and irreversible binding processes to be treated separately and additionally yields microscopic information about the surface properties. In the second part of this work, the thermotropic phase behaviour of solid-supported phospholipid bilayers was investigated. Microstructured, free-standing membrane strips were prepared and studied by imaging ellipsometry, which allowed the temperature-dependent course of the layer thickness and the lateral membrane extension to be observed in parallel. The phase transition temperatures determined for DMPC, diC15PC and DPPC were 2-3 °C above the literature values for vesicular systems. In addition, a marked reduction in the cooperativity of the phase transition was found, indicating a strong influence of the substrate in solid-supported lipid membranes.
In addition, a non-systematic dependence of the results on the surface preparation was found, which makes it indispensable to introduce an internal standard when investigating solid-supported substrates. In the analysis of the thermotropic phase transition behaviour of DMPC/cholesterol mixtures, the individual addressability of the structured lipid membranes was therefore exploited, and a lipid strip of pure DMPC was used as the standard. In this way it could be shown that the phase transition behaviour typical of phospholipids disappears at cholesterol contents of 30 mol% and above. This is attributed to the formation of a fluid phase with highly ordered acyl chains that is induced only by higher sterols. Finally, the addition of ethanol to a microstructured DMPC membrane was shown to induce the formation of an interdigitated bilayer. Imaging ellipsometry is a very good method for studying solid-supported lipid membranes, as it offers very good vertical and sufficient lateral resolution. Although it remains inferior to an atomic force microscope in this respect, it is easier to handle with liquids and temperature control, provides faster imaging and, as an optical method, is non-invasive.
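The fitted rate constants and domain geometry of the thesis are not reproduced here; as a rough illustration of the kind of two-dimensional RSA Monte Carlo described above, the following Python sketch adsorbs hard disks onto a surface with pre-placed "POPS-rich" domains (irreversible binding) embedded in a matrix (reversible binding with desorption). All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not fitted values from the thesis)
L_box = 50.0               # box edge, in units of the particle radius
r_p = 1.0                  # particle radius
k_on, k_off = 50.0, 0.05   # adsorption attempt rate, desorption rate (reversible only)
dt, t_end = 0.01, 20.0

# POPS-rich domains: circles on which adsorption is irreversible
domains = [(rng.uniform(0, L_box, 2), rng.uniform(3, 6)) for _ in range(8)]

def in_domain(p):
    return any(np.hypot(*(p - c)) < R for c, R in domains)

particles = []             # list of (position, irreversible flag)

def overlaps(p):
    return any(np.hypot(*(p - q)) < 2 * r_p for q, _ in particles)

t = 0.0
while t < t_end:
    # desorption of reversibly bound particles
    particles = [(q, irr) for q, irr in particles
                 if irr or rng.random() > k_off * dt]
    # adsorption attempts (Poisson number per time step), RSA overlap rejection
    for _ in range(rng.poisson(k_on * dt)):
        p = rng.uniform(0, L_box, 2)
        if not overlaps(p):
            particles.append((p, in_domain(p)))
    t += dt

coverage = len(particles) * np.pi * r_p**2 / L_box**2
irr_frac = sum(irr for _, irr in particles) / len(particles)
print(f"coverage = {coverage:.3f}, irreversible fraction = {irr_frac:.2f}")
```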

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVES Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference. DESIGN Mathematical modelling study based on data from ART programmes. METHODS We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In baseline scenario A all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates (due to worse adherence) when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and percent of observed mortality difference explained. RESULTS RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74-1.03) in scenario A, 0.94 (0.77-1.02) with delayed switching (scenario B) and 0.80 (0.44-1.07) when assuming a 3-times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a 3-times higher failure rate). Eleven percent was explained by non-HIV related mortality. CONCLUSIONS VL monitoring reduces mortality moderately when assuming improved adherence and decreased failure rates.
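The published model's states and parameters are not given in the abstract; the following toy patient-level simulation only illustrates how such a scenario comparison can be set up, with detection of treatment failure (the VL-vs-CD4 difference) as the varied parameter. All rates are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cohort(n, p_fail_year, p_detect_fail, p_die_failed, p_die_ok, years=5):
    """Toy cohort simulation: virologic failure raises annual mortality until it
    is detected and the patient is switched to second-line ART."""
    deaths = 0
    for _ in range(n):
        failed = False
        for _ in range(years):
            if not failed and rng.random() < p_fail_year:
                failed = True
            if failed and rng.random() < p_detect_fail:
                failed = False                      # switched to second line
            if rng.random() < (p_die_failed if failed else p_die_ok):
                deaths += 1
                break
    return deaths / n

# Assumption: VL monitoring detects failure more quickly/completely than CD4
m_vl  = simulate_cohort(20000, 0.10, 0.80, 0.08, 0.015)
m_cd4 = simulate_cohort(20000, 0.10, 0.35, 0.08, 0.015)
print(f"5-year mortality: VL {m_vl:.3f}, CD4 {m_cd4:.3f}, RR {m_vl / m_cd4:.2f}")
```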

Relevance:

80.00%

Publisher:

Abstract:

The study assessed the economic efficiency of different strategies for the control of post-weaning multi-systemic wasting syndrome (PMWS) and porcine circovirus type 2 subclinical infection (PCV2SI), which have a major economic impact on the pig farming industry worldwide. The control strategies investigated consisted of combinations of up to 5 different control measures. The control measures considered were: (1) PCV2 vaccination of piglets (vac); (2) ensuring an age-adjusted diet for growers (diets); (3) reduction of stocking density (stock); (4) improvement of biosecurity measures (bios); and (5) total depopulation and repopulation of the farm for the elimination of other major pathogens (DPRP). A model was developed to simulate 5 years of production of a pig farm with a 3-weekly batch system and 100 sows. A PMWS/PCV2SI disease and economic model, based on PMWS severity scores, was linked to the production model in order to assess disease losses. The PMWS severity score depends on the combination of post-weaning mortality, PMWS morbidity in younger pigs and the proportion of PCV2-infected pigs observed on farms. The economic analysis investigated eleven different farm scenarios, depending on the number of risk factors present before the intervention. For each strategy, an investment appraisal assessed the extra costs and benefits of reducing a given PMWS severity score to the average score of a slightly affected farm. The net present value obtained for each strategy was then multiplied by the corresponding probability of success to obtain an expected value. A stochastic simulation was performed to account for uncertainty and variability. For moderately affected farms, PCV2 vaccination alone was the most cost-efficient strategy; for highly affected farms it was either PCV2 vaccination alone or in combination with biosecurity measures, the marginal profitability between 'vac' and 'vac+bios' being small. Other strategies such as 'diets', 'vac+diets' and 'bios+diets' were frequently identified as the second or third best strategy. The mean expected values of the best strategy for a moderately and a highly affected farm were £14,739 and £57,648 after 5 years, respectively. This is the first study to compare the economic efficiency of control strategies for PMWS and PCV2SI. The results demonstrate the economic value of PCV2 vaccination, and highlight that on highly affected farms biosecurity measures are required to achieve optimal profitability. The model has potential as a farm-level decision support tool for the control of this economically important syndrome.
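The appraisal step described above (NPV of a strategy multiplied by its probability of success, with stochastic simulation for uncertainty) can be sketched as below. The annual benefit/cost distributions, discount rate and success probabilities are invented, not the study's figures.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10000  # Monte Carlo draws

def expected_value(extra_benefit, extra_cost, p_success, rate=0.05, years=5):
    """NPV of (benefits - costs) over `years`, multiplied by the probability of
    success; benefit and cost are (mean, sd) of assumed annual distributions."""
    disc = (1 + rate) ** -np.arange(1, years + 1)
    b = rng.normal(*extra_benefit, size=(N, years))
    c = rng.normal(*extra_cost, size=(N, years))
    return p_success * ((b - c) * disc).sum(axis=1)

# Illustrative strategies (all GBP figures are assumptions)
ev_vac      = expected_value((9000, 2000), (4000, 500), 0.85)
ev_vac_bios = expected_value((11000, 2500), (5500, 700), 0.90)
for name, ev in [("vac", ev_vac), ("vac+bios", ev_vac_bios)]:
    print(f"{name}: mean EV = £{ev.mean():,.0f}, "
          f"5th percentile = £{np.percentile(ev, 5):,.0f}")
```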

Relevance:

80.00%

Publisher:

Abstract:

The reconstruction of past flash floods in ungauged basins involves a high level of uncertainty, which increases when other processes are involved, such as the transport of large woody material. An important flash flood occurred in 1997 in Venero Claro (Central Spain), causing significant economic losses. The wood material clogged bridge sections, raising the water level upstream. The aim of this study was to reconstruct this event, analysing the influence of woody debris transport on the flood hazard pattern. Because the reach in question was affected by backwater effects due to bridge clogging, using only high water marks or palaeostage indicators may overestimate discharges, so other methods are required to estimate peak flows. The peak discharge was therefore estimated (123 ± 18 m³ s⁻¹) using indirect methods, and one-dimensional hydraulic simulation was also used to validate these indirect estimates through an iterative process (127 ± 33 m³ s⁻¹), to reconstruct the bridge obstruction and obtain the blockage ratio during the 1997 event (~48%), and to derive the bridge clogging curves. Rainfall-runoff modelling with stochastic simulation of different rainfall field configurations also helped to confirm that a peak discharge greater than 150 m³ s⁻¹ is very unlikely and that the estimated discharge range is consistent with the estimated rainfall amount (233 ± 27 mm). The backwater effect due to the obstruction (water level ~7 m) made the 1997 flood (~35-year return period) equivalent to the 50-year flood. This allowed the equivalent return period to be defined as the recurrence interval of an event of specified magnitude which, where large woody debris is present, is equivalent in water depth and extent of flooded area to a more extreme event of greater magnitude. These results highlight the need to include obstruction phenomena in flood hazard analysis.
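The "equivalent return period" idea can be made concrete with a toy calculation: given an assumed Gumbel flood-frequency curve and assumed stage-discharge ratings with and without the obstruction, find the unobstructed return period that produces the same stage. Every number below (frequency fit, rating exponents, backwater factor) is a placeholder, not data from the study.

```python
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import brentq

# Assumed Gumbel fit for annual peak discharge (m³/s)
freq = gumbel_r(loc=60, scale=30)

def discharge(T):                     # discharge with return period T years
    return freq.ppf(1 - 1 / T)

# Toy stage-discharge ratings; the obstruction raises stage for the same flow
stage_free = lambda q: 0.5 * q**0.6
stage_blocked = lambda q: 1.04 * stage_free(q)   # mild backwater rise (assumed)

q_event = discharge(35)               # the ~35-year 1997 flood
h_event = stage_blocked(q_event)      # observed, backwater-raised stage

# Equivalent return period: the unobstructed flood producing the same stage
T_eq = brentq(lambda T: stage_free(discharge(T)) - h_event, 35, 500)
print(f"Q(35 yr) = {q_event:.0f} m³/s, stage = {h_event:.1f} m, "
      f"equivalent return period ≈ {T_eq:.0f} yr")
```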

Relevance:

80.00%

Publisher:

Abstract:

In the fermion loop formulation the contributions to the partition function naturally separate into topological equivalence classes with a definite sign. This separation forms the basis for an efficient fermion simulation algorithm using a fluctuating open fermion string. It guarantees sufficient tunnelling between the topological sectors, and hence provides a solution to the fermion sign problem affecting systems with broken supersymmetry. Moreover, the algorithm shows no critical slowing down even in the massless limit and can hence handle the massless Goldstino mode emerging in the supersymmetry broken phase. In this paper – the third in a series of three – we present the details of the simulation algorithm and demonstrate its efficiency by means of a few examples.
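The paper's open fermion string algorithm is specific to the fermion loop formulation of the model studied there. As a loose, generic illustration of the same worm-type idea (an open string whose head performs local Metropolis updates on a loop-gas configuration), here is a minimal Prokof'ev-Svistunov worm update for the 2D Ising high-temperature expansion, a deliberately swapped-in and much simpler system; measurements are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.4
steps = 200 * L * L
t = np.tanh(beta)

# bonds[d, x, y]: occupation of the bond leaving site (x, y) in direction d
# (d = 0: +x, d = 1: +y), with periodic boundaries.
bonds = np.zeros((2, L, L), dtype=np.int8)

ira = masha = (0, 0)          # head and tail of the open string ("worm")
for _ in range(steps):
    if ira == masha:
        # closed-loop configuration reached: restart the worm elsewhere
        ira = masha = (int(rng.integers(L)), int(rng.integers(L)))
    x, y = ira
    d = int(rng.integers(4))  # propose moving the head to one of 4 neighbours
    if d == 0:   b, nxt = (0, x, y), ((x + 1) % L, y)
    elif d == 1: b, nxt = (0, (x - 1) % L, y), ((x - 1) % L, y)
    elif d == 2: b, nxt = (1, x, y), (x, (y + 1) % L)
    else:        b, nxt = (1, x, (y - 1) % L), (x, (y - 1) % L)
    # Metropolis: adding a bond carries weight tanh(beta), removing 1/tanh(beta)
    ratio = t if bonds[b] == 0 else 1.0 / t
    if rng.random() < min(1.0, ratio):
        bonds[b] ^= 1         # flip the bond and move the head
        ira = nxt

print("final bond density:", bonds.mean())
```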

Relevance:

80.00%

Publisher:

Abstract:

Foot-and-mouth disease (FMD) is a highly contagious disease that caused several large outbreaks in Europe in the last century. The last important outbreak in Switzerland took place in 1965/66, affecting more than 900 premises; more than 50,000 animals were slaughtered. Large-scale emergency vaccination of the cattle and pig population was applied to control the epidemic. In recent years, many studies have used infectious disease models to assess the impact of different disease control measures, including models developed for diseases exotic to the specific region of interest. Often, the absence of real outbreak data makes a validation of such models impossible. This study aimed to evaluate whether a spatial, stochastic simulation model (the Davis Animal Disease Simulation model) can predict the course of a Swiss FMD epidemic based on the available historical input data on population structure, contact rates, epidemiology of the virus, and quality of the vaccine. In addition, the potential outcome of the 1965/66 FMD epidemic without vaccination was investigated. Comparing the model outcomes to reality, only the largest 10% of the simulated outbreaks approximated the number of animals culled, and the model greatly overestimated the number of culled premises. While the model could not reproduce the duration of the 1965/66 epidemic well, it accurately estimated the size of the infected area. Without vaccination, the model predicted a much higher mean number of culled animals than with vaccination, indicating that vaccination was likely crucial in controlling the Swiss FMD outbreak of 1965/66. The study demonstrated the feasibility of analysing historical outbreak data with modern analytical tools. However, it also confirmed that epidemics predicted by even a most carefully parameterized model cannot capture all eventualities of a real epidemic. Decision makers therefore need to be aware that infectious disease models are useful tools to support the decision-making process, but their results are not as valuable as real observations and should always be interpreted with caution.
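The Davis model itself is spatial and far richer than can be shown here; the following non-spatial chain-binomial SIR sketch only illustrates the basic experiment of comparing outbreak size with and without vaccination. Premises counts, transmission and vaccine parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def outbreak(n_premises=5000, beta=0.45, days_infectious=7,
             vaccination_day=None, vacc_effect=0.6, n_runs=200):
    """Daily chain-binomial SIR among premises; vaccination from
    `vaccination_day` onwards scales transmission by (1 - vacc_effect)."""
    affected = []
    for _ in range(n_runs):
        S, I, R = n_premises - 1, 1, 0
        day = 0
        while I > 0:
            vacc = vaccination_day is not None and day >= vaccination_day
            b = beta * (1 - vacc_effect) if vacc else beta
            p_inf = 1 - np.exp(-b * I / n_premises)  # per-premises daily risk
            new_i = rng.binomial(S, p_inf)
            new_r = rng.binomial(I, 1 / days_infectious)
            S, I, R = S - new_i, I + new_i - new_r, R + new_r
            day += 1
        affected.append(R)
    return np.mean(affected)

print("mean premises affected, no vaccination:   ", outbreak())
print("mean premises affected, vaccination day 30:", outbreak(vaccination_day=30))
```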

Relevance:

80.00%

Publisher:

Abstract:

Agent-based modelling and simulation (ABM) is a rather new approach for studying complex systems with interacting autonomous agents that has lately undergone great growth in various fields such as biology, physics, social science, economics and business. Efforts to model and simulate the highly complex cement hydration process have been made over the past 40 years, with the aim of predicting the performance of concrete and designing innovative and enhanced cementitious materials. The ABM presented here - based on previous work - focuses on the early stages of cement hydration by modelling the physical-chemical processes at the particle level. The model treats cement hydration as a system evolving in time and 3D space, involving multiple diffusing and reacting species of spherical particles. Chemical reactions are simulated by adaptively selecting a discrete stochastic simulation for the appropriate reaction whenever that is necessary. Interactions between particles are also considered. The model is inspired by the reported cellular automata approach, which provides detailed predictions of cement microstructure at the expense of significant computational difficulty. The ABM approach herein seeks to strike an optimal balance between accuracy and computational efficiency.
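The discrete stochastic simulation invoked adaptively in such models is typically Gillespie's direct-method SSA. Below is a minimal, self-contained sketch of the direct method applied to a toy dissolution-precipitation pair; the species names and rate constants are placeholders, not a cement chemistry model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(x0, stoich, rates, t_end):
    """Gillespie direct method: `stoich` holds one state-change vector per
    reaction, `rates` maps the current state to reaction propensities."""
    t, x = 0.0, np.array(x0, dtype=float)
    traj = [(t, x.copy())]
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1 / a0)          # time to the next reaction
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x += stoich[j]
        traj.append((t, x.copy()))
    return traj

# Toy system: C3S -> ions (dissolution), ions -> CSH (precipitation)
stoich = np.array([[-1, +1, 0],    # dissolution
                   [0, -1, +1]])   # precipitation
k_dis, k_pre = 0.02, 0.05
rates = lambda x: np.array([k_dis * x[0], k_pre * x[1]])

traj = gillespie([500, 0, 0], stoich, rates, t_end=200.0)
print("final state [C3S, ions, CSH]:", traj[-1][1])
```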

Relevance:

80.00%

Publisher:

Abstract:

Natural regeneration is an ecological key process that makes plant persistence possible and, consequently, constitutes an essential element of sustainable forest management. In this respect, natural regeneration in even-aged stands of Pinus pinea L. located in the Spanish Northern Plateau has not always been successfully achieved despite over a century of pine nut-based management. As a result, natural regeneration has recently become a major concern for forest managers at a time when investment in silviculture is being rationalized. The present dissertation aims to provide answers to forest managers on this topic through the development of an integral, multistage regeneration model for P. pinea stands in the region. From this model, recommendations for natural regeneration-based silviculture can be derived under present and future climate scenarios, and the model structure makes it possible to detect the likely bottlenecks affecting the process. The integral model consists of five submodels corresponding to the subprocesses linking the stages involved in natural regeneration (seed production, seed dispersal, seed germination, seed predation and seedling survival). The outputs of the submodels represent the transition probabilities between these stages as a function of climatic and stand variables, which in turn are representative of the ecological factors driving regeneration. At the subprocess level, the findings of this dissertation should be interpreted as follows. The scheduling of the shelterwood system currently conducted over low-density stands leads to situations of dispersal limitation from the initial stages of the regeneration period. Concerning predation, predator activity appears to be limited only by the occurrence of severe summer droughts and masting events, making the summer a favourable period for seed survival. Outside this time interval, predators were found to almost totally deplete seed crops. Given that P. pinea dissemination occurs in summer (i.e. the period safe from predation), the likelihood of a seed escaping destruction is conditional on germination occurring before the intensification of predator activity. However, the optimal conditions for germination seldom occur, restricting emergence to a few days during the fall. Thus, the window to reach the seedling stage is narrow. In addition, the seedling survival submodel predicts extremely high seedling mortality rates, so that only some individuals from large cohorts will be able to persist. These facts, along with the strong climate-mediated masting habit exhibited by P. pinea, reveal that the overall probability of establishment is low. Given this background, current management (low final stand densities resulting from intense thinning and strict felling schedules) constrains the occurrence of enough favourable events to achieve natural regeneration during the current rotation time. Stochastic simulation and optimisation computed through the integral model confirm this circumstance, suggesting that more flexible and progressive regeneration fellings should be conducted. From an ecological standpoint, these results point to a reproductive strategy leading to uneven-aged stand structures, in full accordance with the medium shade-tolerant behaviour of the species. As a final remark, stochastic simulations performed under a climate-change scenario show that regeneration in the species will not be strongly hampered in the future.
This resilient behaviour highlights the fundamental ecological role played by P. pinea in demanding areas where other tree species fail to persist.
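A multistage model of this kind chains the subprocess probabilities year by year; the minimal sketch below illustrates the structure (seed crop filtered through dispersal, predation escape, germination and survival, with occasional masting years). All probabilities and crop sizes are invented, not the fitted submodels.

```python
import numpy as np

rng = np.random.default_rng(11)

def regeneration(years=25, n_runs=2000):
    """Established seedlings per ha: each year's seed crop passes through the
    successive subprocess probabilities of the regeneration chain."""
    established = np.zeros(n_runs)
    for i in range(n_runs):
        for _ in range(years):
            mast = rng.random() < 0.2                    # masting year
            seeds = rng.poisson(30000 if mast else 2000)  # seeds per ha
            p_disp = 0.5                   # reach the regeneration gap
            p_escape = 0.9 if mast else 0.3  # escape predation (satiation)
            p_germ = 0.05                  # narrow germination window
            p_surv = 0.02                  # first-year seedling survival
            established[i] += rng.binomial(
                seeds, p_disp * p_escape * p_germ * p_surv)
    return established

est = regeneration()
print(f"median established/ha after 25 yr: {np.median(est):.0f}, "
      f"P(failure, <200/ha) = {(est < 200).mean():.2f}")
```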

Relevance:

80.00%

Publisher:

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integration of inputs from other neurons and determines which neurons receive its output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge. Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated. Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixtures of polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines.
The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers. Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics. Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are empirically evaluated over real datasets. We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features. A web application is developed to retrieve the experts' classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology. Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
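The hybrid directional/linear naive Bayes described above can be sketched compactly. The example below uses one von Mises-distributed directional feature and one Gaussian linear feature; the parameter estimators are standard (circular mean for the location; the Banerjee et al. approximation for the concentration κ). The feature names and synthetic data are purely illustrative, not the dissertation's datasets.

```python
import numpy as np
from scipy.stats import vonmises, norm

def fit_vonmises(theta):
    """Circular mean and the Banerjee et al. approximation for kappa."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu, r = np.arctan2(S, C), np.hypot(C, S)
    kappa = r * (2 - r**2) / (1 - r**2)
    return mu, kappa

class HybridNB:
    """Naive Bayes with a directional feature (e.g. axon angle) and a
    linear feature (e.g. dendrite length); hypothetical feature names."""
    def fit(self, angles, lengths, y):
        self.classes = np.unique(y)
        self.params = {c: ((y == c).mean(),
                           fit_vonmises(angles[y == c]),
                           (lengths[y == c].mean(), lengths[y == c].std()))
                       for c in self.classes}
        return self

    def predict(self, angles, lengths):
        scores = [np.log(prior)
                  + vonmises.logpdf(angles, kappa, loc=mu)
                  + norm.logpdf(lengths, m, s)
                  for prior, (mu, kappa), (m, s) in
                  (self.params[c] for c in self.classes)]
        return self.classes[np.argmax(scores, axis=0)]

# Synthetic demo: two neuron types with different preferred axon directions
rng = np.random.default_rng(5)
a = np.concatenate([vonmises.rvs(4, loc=0, size=200, random_state=rng),
                    vonmises.rvs(4, loc=np.pi, size=200, random_state=rng)])
l = np.concatenate([rng.normal(300, 40, 200), rng.normal(500, 60, 200)])
y = np.repeat([0, 1], 200)
clf = HybridNB().fit(a, l, y)
print("training accuracy:", (clf.predict(a, l) == y).mean())
```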

Relevance:

80.00%

Publisher:

Abstract:

Spatial characterization of non-Gaussian attributes in earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those in the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation termed successive cokriging of indicators that avoids the classic simultaneous solution and related computational problems, while obtaining equivalent results to the impractical simultaneous solution of cokriging of indicators. A successive minimization of the estimation variance of probability estimates is performed, as additional data are successively included into the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
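The paper's successive cokriging reorganizes the simultaneous cokriging system; the building block either way is a kriging solve for an indicator at a threshold. A minimal ordinary-kriging version (not the successive formulation itself) is sketched below, with an assumed exponential covariance and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sample locations and values, and an indicator threshold
pts = rng.uniform(0, 10, size=(8, 2))
vals = rng.gamma(2.0, 1.5, size=8)
z_c = 3.0
ind = (vals <= z_c).astype(float)       # indicator data I(u; z_c)

cov = lambda h: np.exp(-h / 3.0)        # assumed exponential covariance

def ok_indicator(x0):
    """Ordinary kriging of the indicator: estimates P(Z(x0) <= z_c | data)."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0                       # unbiasedness constraint row/column
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(A, b)[:n]       # last entry is the Lagrange multiplier
    return float(np.clip(w @ ind, 0, 1))  # simple order-relation correction

print("P(Z <= 3.0) at (5, 5):", ok_indicator(np.array([5.0, 5.0])))
```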

Relevance:

80.00%

Publisher:

Abstract:

Bistability and switching are two important aspects of the genetic regulatory network of λ phage. Positive and negative feedbacks are key regulatory mechanisms in this network. By the introduction of threshold values, the developmental pathway of λ phage is divided into different stages. If the protein level reaches a threshold value, positive or negative feedback becomes effective and regulates the process of development. Using this regulatory mechanism, we present a quantitative model that realizes bistability and switching of λ phage based on experimental data. This model describes the decisive mechanisms for the different pathways in induction. A stochastic model is also introduced for describing the statistical properties of switching in induction. A stochastic degradation rate is used to represent intrinsic noise in induction, switching the system from the lysogenic pathway to the lytic pathway. The approach in this paper represents an attempt to describe the regulatory mechanism of a genetic regulatory network under the influence of intrinsic noise in the framework of continuous models.
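The continuous-model-with-intrinsic-noise idea can be sketched with an Euler-Maruyama integration of a protein level under Hill-type positive feedback, where the degradation rate fluctuates stochastically; noise occasionally drives the level below a threshold, switching the cell to lysis. All parameters below are invented for illustration, not fitted to phage data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative parameters only (bistable for these values):
k_basal, k_fb, K, n_hill = 0.05, 4.0, 10.0, 2  # basal + positive-feedback production
gamma0, sigma = 0.2, 0.12                      # mean degradation rate and its noise
threshold = 5.0                                # falling below this commits to lysis
dt, t_end, n_cells = 0.02, 100.0, 200

switched = 0
for _ in range(n_cells):
    x = 12.0                                   # start near the lysogenic (high) state
    for _ in range(int(t_end / dt)):
        hill = x**n_hill / (K**n_hill + x**n_hill)
        # Euler-Maruyama step; the fluctuating degradation rate is the noise source
        gamma = gamma0 + sigma * rng.normal() / np.sqrt(dt)
        x = max(x + (k_basal + k_fb * hill - gamma * x) * dt, 0.0)
        if x < threshold:                      # noise-driven switch to lysis
            switched += 1
            break

print(f"fraction of cells switching to lysis: {switched / n_cells:.2f}")
```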

Relevance:

80.00%

Publisher:

Abstract:

We introduce a genetic programming (GP) approach for evolving genetic networks that exhibit desired dynamics when simulated as a discrete stochastic process. Our representation of genetic networks is based on a biochemical reaction model including key elements such as transcription, translation and post-translational modifications. The stochastic, reaction-based GP system is similar to, but not identical with, algorithmic chemistries. We evolved genetic networks with noisy oscillatory dynamics. The results show the practicality of evolving particular dynamics in gene regulatory networks when modelled with intrinsic noise.
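The paper evolves network structure with GP; as a much-reduced illustration of the core loop (evolution scored against discrete stochastic simulations), the sketch below runs a (mu + lambda) evolution strategy over the rate constants of a fixed two-reaction birth-death network, with a direct-method SSA providing a noisy fitness. Everything here is a simplifying assumption, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(9)

def ssa_mean(k_prod, k_deg, t_end=50.0, n_runs=5):
    """Direct-method SSA for  0 -> P (k_prod),  P -> 0 (k_deg * P);
    returns the mean final copy number over a few noisy runs."""
    finals = []
    for _ in range(n_runs):
        t, p = 0.0, 0
        while t < t_end:
            a_prod, a_deg = k_prod, k_deg * p
            a0 = a_prod + a_deg
            t += rng.exponential(1 / a0)
            p += 1 if rng.random() < a_prod / a0 else -1
        finals.append(p)
    return float(np.mean(finals))

target = 40.0
fitness = lambda g: -abs(ssa_mean(*g) - target)  # fitness is itself stochastic

# (mu + lambda) evolution strategy over the two rate constants
pop = [np.exp(rng.normal(0.0, 1.0, 2)) for _ in range(8)]
for _ in range(30):
    offspring = [g * np.exp(rng.normal(0.0, 0.3, 2)) for g in pop for _ in range(2)]
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:8]

best = pop[0]
print("best (k_prod, k_deg):", best, "-> mean copy number:", ssa_mean(*best))
```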

Relevance:

80.00%

Publisher:

Abstract:

In recent years there has been a great effort to combine the technologies and techniques of GIS and process models. This project examines the issues of linking a standard current-generation 2½D GIS with several existing model codes. The focus for the project has been the Shropshire Groundwater Scheme, which is being developed to augment flow in the River Severn during drought periods by pumping water from the Shropshire Aquifer. Previous authors have demonstrated that under certain circumstances pumping could reduce the soil moisture available for crops. This project follows earlier work at Aston in which the effects of drawdown were delineated and quantified through the development of a software package implementing a technique that brought together the significant spatially varying parameters. This technique is repeated here, but using a standard GIS called GRASS. The GIS proved adequate for the task, and the added functionality provided by the general-purpose GIS - the data capture, manipulation and visualisation facilities - was of great benefit. The bulk of the project is concerned with examining the issues of linking GIS and environmental process models. To this end, a groundwater model (Modflow) and a soil moisture model (SWMS2D) were linked to the GIS, and a crop model was implemented within the GIS. A loose-linked approach was adopted, and secondary and surrogate data were used wherever possible. The issues examined relate to: justification of a loose-linked versus a closely integrated approach; how, technically, to achieve the linkage; how to reconcile the different data models used by the GIS and the process models; control of the movement of data between models of environmental subsystems in order to model the total system; the advantages and disadvantages of using a current-generation GIS as a medium for linking environmental process models; generation of input data, including the use of geostatistics, stochastic simulation, remote sensing, regression equations and mapped data; issues of accuracy, uncertainty and simply providing adequate data for the complex models; and how such a modelling system fits into an organisational framework.
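In a loose-linked architecture of this kind, models and the GIS typically exchange files in a raster format both sides can read. The sketch below writes a model output grid as an ESRI ASCII grid, a simple and widely supported exchange format; the drawdown values and coordinates are invented for illustration.

```python
import numpy as np

def write_esri_ascii(path, grid, xll, yll, cellsize, nodata=-9999.0):
    """Write a 2D array as an ESRI ASCII grid: a plain-text raster format
    suitable for loose coupling between a process model and a GIS."""
    header = (f"ncols {grid.shape[1]}\n"
              f"nrows {grid.shape[0]}\n"
              f"xllcorner {xll}\n"
              f"yllcorner {yll}\n"
              f"cellsize {cellsize}\n"
              f"NODATA_value {nodata}\n")
    g = np.where(np.isnan(grid), nodata, grid)   # encode missing cells
    with open(path, "w") as f:
        f.write(header)
        np.savetxt(f, g, fmt="%.4f")

# Toy model output: drawdown (m) on a 50 x 50 grid of 100 m cells
drawdown = np.clip(np.random.default_rng(4).normal(0.5, 0.2, (50, 50)), 0, None)
write_esri_ascii("drawdown.asc", drawdown,
                 xll=350000.0, yll=280000.0, cellsize=100.0)
```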