151 results for Probabilistic cellular automata

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye)

Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one.
The outcome is remarkable, as the resulting topologies find themselves sharing properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time-evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to that of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
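As a minimal illustration of the Kauffman-style model discussed above, the sketch below simulates a small random Boolean network until it falls into an attractor. The network size, connectivity, random wiring and truth tables are illustrative assumptions of this sketch, not the thesis's actual experimental setup.

```python
import random

# Toy Kauffman random Boolean network (NK model): N genes, each with
# K = 2 randomly chosen regulators and a random Boolean update function.
random.seed(1)
N, K = 8, 2
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronous update: every gene reads its K regulators at once."""
    return tuple(
        tables[i][sum(state[g] << b for b, g in enumerate(inputs[i]))]
        for i in range(N)
    )

# Iterate until a state repeats; the loop the trajectory enters is an
# attractor, and its length is the attractor's cycle length.
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
while state not in seen:
    seen[state] = len(seen)
    state = step(state)
cycle_len = len(seen) - seen[state]
print("attractor cycle length:", cycle_len)
```

Because the state space is finite and the update deterministic, every trajectory must eventually enter such a cycle, which is the property the evolutionary and asynchronous variants in the thesis build upon.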

Relevance: 100.00%

Abstract:

In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first case, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the wait at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average wait than a population of risk-neutral or highly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge towards the market's reference wait. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect subjects' performance and decisions.
In particular, the provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustments it has already decided upon but not yet implemented. - Queueing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queueing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are when accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked-in" into a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature on queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is a large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
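The adaptive-expectations rule at the heart of Chapters 2 and 3 can be rendered as a toy simulation. The parameter values, the linear "wait grows with load" congestion model, and the omission of the best-performing-neighbour information channel are all simplifying assumptions of this sketch, not the dissertation's model.

```python
import random

# Captive customers repeatedly choose among facilities based on smoothed
# sojourn-time expectations; w is the weight on new information
# ("conservative" customers have small w, "reactive" ones large w).
random.seed(0)
n_customers, n_facilities, w = 100, 3, 0.3
expect = [[random.uniform(5.0, 15.0) for _ in range(n_facilities)]
          for _ in range(n_customers)]      # perceived sojourn times

def simulate_period():
    # Each customer joins the facility with the lowest expected sojourn time.
    choices = [min(range(n_facilities), key=lambda f: expect[c][f])
               for c in range(n_customers)]
    load = [choices.count(f) for f in range(n_facilities)]
    for c, f in enumerate(choices):
        sojourn = load[f] / 10.0            # toy congestion model
        # Adaptive expectations: blend new information into memory.
        expect[c][f] = w * sojourn + (1 - w) * expect[c][f]
    return load

for _ in range(50):
    load = simulate_period()
print("final split across facilities:", load)
```

When the final split is close to equal, no customer wants to switch, which is the Nash-equilibrium configuration discussed in the abstract.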

Relevance: 90.00%

Relevance: 80.00%

Abstract:

Many studies have forecast the possible impact of climate change on plant distributions using models based on ecological niche theory. In their basic implementation, niche-based models do not constrain predictions by dispersal limitations. Hence, most niche-based modelling studies published so far have assumed dispersal to be either unlimited or null. However, depending on the rate of climatic change, the landscape fragmentation and the dispersal capabilities of individual species, these assumptions are likely to prove inaccurate, leading to under- or overestimation of future species distributions and yielding large uncertainty between these two extremes. As a result, the concepts of "potentially suitable" and "potentially colonisable" habitat are expected to differ significantly. To quantify to what extent these two concepts can differ, we developed MIGCLIM, a model simulating plant dispersal under climate change and landscape fragmentation scenarios. MIGCLIM implements various parameters, such as dispersal distance, increase in reproductive potential over time, barriers to dispersal, and long-distance dispersal. Several simulations were run for two virtual species in a study area of the western Swiss Alps, varying dispersal distance and other parameters. Each simulation covered the hundred-year period 2001-2100, and three different IPCC-based temperature warming scenarios were considered. Our results indicate that: (i) using realistic parameter values, the future potential distributions generated with MIGCLIM can differ significantly (up to a more than 95% decrease in colonised surface) from those that ignore dispersal; (ii) this divergence increases both with increasing climate warming and over longer time periods; (iii) the uncertainty associated with the warming scenario can be nearly as large as that related to the dispersal parameters; (iv) accounting for dispersal, even roughly, can substantially reduce the uncertainty in projections.
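The gap between "potentially suitable" and "potentially colonisable" habitat can be caricatured in a few lines. This is not the actual MIGCLIM implementation: the one-dimensional landscape, cell counts, climatic shift rate and dispersal distance below are invented purely to show how dispersal limitation shrinks the colonised area relative to the suitable one.

```python
# A 1-D landscape whose climatically suitable zone shifts 'shift' cells
# per decade, while the species can only colonise within 'd' cells of
# currently occupied cells each step.
def simulate(n_cells=100, steps=10, shift=8, d=3):
    suitable = set(range(0, 30))    # initial climatic envelope
    occupied = set(range(0, 30))    # species initially fills its envelope
    for _ in range(steps):
        # Climate change moves the suitable zone upslope.
        suitable = {c + shift for c in suitable if c + shift < n_cells}
        # Dispersal-limited colonisation around occupied cells only.
        reachable = {c + k for c in occupied for k in range(-d, d + 1)}
        occupied = suitable & reachable
    return len(suitable), len(occupied)

suit, occ = simulate()
print(f"suitable cells: {suit}, actually colonised: {occ}")
```

With these numbers the species cannot track the shifting envelope at all, mirroring the abstract's point that ignoring dispersal can overestimate future occupancy by more than 95%.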

Relevance: 80.00%

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
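The fixed-lid idea can be sketched in a few lines. The bed elevations, the planar lid level, and the Manning-style depth-to-the-5/3 conveyance weighting are all illustrative assumptions of this sketch, not the paper's actual scheme.

```python
# Fixed-lid planar water surface: depth at each node is simply the lid
# elevation minus the local bed elevation, so no iterative free-surface
# solution is needed for a low-slope sand-bed river.
bed = [2.0, 1.2, 0.8, 1.5, 2.2]   # bed elevations across a section (m)
water_surface = 2.5               # planar "lid" elevation (m)

depths = [max(water_surface - z, 0.0) for z in bed]

# Partition unit discharge across the section in proportion to a
# Manning-style conveyance, depth**(5/3) (an assumed weighting).
conveyance = [d ** (5 / 3) for d in depths]
total = sum(conveyance)
unit_q = [c / total for c in conveyance]
print("depths (m):", [round(d, 2) for d in depths])
print("discharge fractions:", [round(q, 3) for q in unit_q])
```

The deepest node carries the largest share of the discharge, which is the channel-scale topographic control the abstract identifies as dominant in this reach.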

Relevance: 20.00%

Abstract:

Peripheral nerve injury is a serious problem that significantly affects patients' lives. Autografts are the "gold standard" used to repair the injury gap; however, only 50% of patients fully recover from the trauma. Artificial conduits are a valid alternative for peripheral nerve repair. They aim at confining the nerve environment throughout the regeneration process and providing guidance to axon outgrowth. Biocompatible materials have been carefully designed to reduce inflammation and scar tissue formation, but modifications of the inner lumen are still required in order to optimise the scaffolds. Biomimicking the native neural tissue with extracellular matrix fillers or coatings has shown great promise for repairing longer gaps and extending cell survival. In addition, extracellular matrix molecules provide a platform to further bind growth factors that can be released into the system over time. Alternatively, conduit fillers can be used for cell transplantation at the injury site, reducing the lag time required for endogenous Schwann cells to proliferate and take part in the regeneration process. This review provides an overview of the importance of extracellular matrix molecules in peripheral nerve repair.

Relevance: 20.00%

Abstract:

The cytoskeleton, composed of actin filaments, intermediate filaments, and microtubules, is a highly dynamic supramolecular network actively involved in many essential biological mechanisms such as cellular structure, transport, movement, differentiation, and signaling. As a first step to characterize the biophysical changes associated with cytoskeleton functions, we have developed finite element models of the organization of the cell that have allowed us to interpret atomic force microscopy (AFM) data at a higher resolution than in previous work. Thus, by assuming that living cells behave mechanically as multilayered structures, we have been able to identify superficial and deep effects that could be related to actin and microtubule disassembly, respectively. In Cos-7 cells, actin destabilization with Cytochalasin D induced a decrease of the visco-elasticity close to the membrane surface, while destabilizing microtubules with Nocodazole produced a stiffness decrease only in deeper parts of the cell. In both cases, these effects were reversible. Cell softening was measurable with AFM at concentrations of the destabilizing agents that did not induce detectable effects on the cytoskeleton network when viewing the cells with fluorescent confocal microscopy. All experimental results could be simulated by our models. This technology opens the door to the study of the biophysical properties of signaling domains extending from the cell surface to deeper parts of the cell.

Relevance: 20.00%

Abstract:

The concept of cellular schwannoma as an unusual benign tumor is well established for peripheral nerves but has never been tested in neurosurgical series. In order to test the validity of this concept in cranial nerves and spinal roots, we performed an analysis of the clinical and morphological characteristics of 12 cellular and 166 classical benign schwannomas. Immunohistochemical detection of antigen expression in Schwann cells, including proliferating cell nuclear antigen (PCNA), was also performed. This study shows that cellular schwannomas in neurosurgical series manifest at a lower age than the classical benign variant and occur mainly in the spinal roots. Mitotic activity and sinusoidal vessels appear more frequently in cellular schwannomas and constitute, together with high cellularity, the most valuable criteria separating the two entities. The postoperative course in both types of tumors was free of metastases or sarcomatous changes. Immunoexpression of S-100 protein, vimentin, epithelial membrane antigen and glial fibrillary acidic protein is not statistically different between the two variants. In contrast, PCNA is more highly expressed in cellular schwannomas. These results confirm the concept that cellular schwannomas are a clinico-pathological variant of benign schwannomas and provide significant support for the introduction of this entity in neurosurgical oncology.

Relevance: 20.00%

Abstract:

Cellular inhibitor of apoptosis (cIAP) proteins, cIAP1 and cIAP2, are important regulators of tumor necrosis factor (TNF) superfamily (SF) signaling and are amplified in a number of tumor types. They are targeted by IAP antagonist compounds that are undergoing clinical trials. IAP antagonist compounds trigger cIAP autoubiquitylation and degradation. The TNFSF member TWEAK induces lysosomal degradation of TRAF2 and cIAPs, leading to elevated NIK levels and activation of non-canonical NF-kappaB. To investigate the role of the ubiquitin ligase RING domain of cIAP1 in these pathways, we used cIAP-deleted cells reconstituted with cIAP1 point mutants designed to interfere with the ability of the RING to dimerize or to interact with E2 enzymes. We show that RING dimerization and E2 binding are required for IAP antagonists to induce cIAP1 degradation and protect cells from TNF-induced cell death. The RING functions of cIAP1 are required for full TNF-induced activation of NF-kappaB; however, delayed activation of NF-kappaB still occurs in cIAP1 and -2 double knock-out cells. The RING functions of cIAP1 are also required to prevent constitutive activation of non-canonical NF-kappaB by targeting NIK for proteasomal degradation. However, in cIAP double knock-out cells, TWEAK was still able to increase NIK levels, demonstrating that NIK can be regulated by cIAP-independent pathways. Finally, we show that, unlike IAP antagonists, TWEAK was able to induce degradation of cIAP1 RING mutants. These results emphasize the critical importance of the RING of cIAP1 in many signaling scenarios, but also demonstrate that in some pathways RING functions are not required.

Relevance: 20.00%

Abstract:

T-cell vaccination may prevent or treat cancer and infectious diseases, but further progress is required to increase clinical efficacy. Step-by-step improvements of T-cell vaccination in phase I/II clinical studies, combined with very detailed analysis of T-cell responses at the single-cell level, are the strategy of choice for the identification of the most promising vaccine candidates for testing in subsequent large-scale phase III clinical trials. Major aims are to fully identify the most efficient T-cells in anticancer therapy, to characterize their TCRs, and to pinpoint the mechanisms of T-cell recruitment and function in well-defined clinical situations. Here we discuss novel strategies for the assessment of human T-cell responses, revealing in part unprecedented insight into T-cell biology and novel structural principles that govern TCR-pMHC recognition. Together, the described approaches advance our knowledge of T-cell-mediated protection from human diseases.

Relevance: 20.00%

Abstract:

Summary: Internal ribosome entry sites (IRES) are used by viruses as a strategy to bypass the inhibition of cap-dependent translation that commonly results from viral infection. IRES are also used in eukaryotic cells to control mRNA translation under conditions of cellular stress (apoptosis, heat shock) or during the G2 phase of the cell cycle, when general protein synthesis is inhibited. Variation in cellular expression levels has been shown to be inherited. Expression is controlled, among others, by transcription factors and by the efficiency of cap-mediated translation and ribosome activity. We aimed at identifying genomic determinants of variability in IRES-mediated translation for two representative IRES [Encephalomyocarditis virus (EMCV) and X-linked Inhibitor-of-Apoptosis (XIAP) IRES]. We used bicistronic lentiviral constructs expressing two fluorescent reporter transgenes. Lentiviruses were used to transduce seven different laboratory cell lines and B lymphoblastoid cell lines from the Centre d'Etude du Polymorphisme Humain (CEPH; 15 pedigrees; n=209), representing an in vitro approach to family structure that allows genome-scan analyses. The relative expression of the two markers was assessed by FACS. IRES efficiency varies according to cellular background, but also varies, for the same cell type, among individuals. The control of IRES activity presents a heritable component (h2) of 0.47 and 0.36 for the EMCV and XIAP IRES, respectively. A genome scan identified a suggestive quantitative trait locus (LOD 2.35) involved in the control of XIAP IRES activity.

Relevance: 20.00%

Abstract:

Continuing developments in science and technology mean that the amount of information forensic scientists are able to provide for criminal investigations is ever increasing. The commensurate increase in complexity creates difficulties for scientists and lawyers with regard to evaluation and interpretation, notably with respect to issues of inference and decision. Probability theory, implemented through graphical methods, and specifically Bayesian networks, provides powerful methods to deal with this complexity. Extensions of these methods to elements of decision theory provide further support and assistance to the judicial system. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian decision networks for the evaluation and interpretation of scientific findings in forensic science, and for the support of decision-makers in their scientific and legal tasks. The book:
- Includes self-contained introductions to probability and decision theory.
- Develops the characteristics of Bayesian networks, object-oriented Bayesian networks and their extension to decision models.
- Features implementation of the methodology with reference to commercial and academically available software.
- Presents standard networks and their extensions that can be easily implemented and that can assist in the reader's own analysis of real cases.
- Provides a technique for structuring problems and organizing data based on methods and principles of scientific reasoning.
- Contains a method for the construction of coherent and defensible arguments for the analysis and evaluation of scientific findings and for decisions based on them.
- Is written in a lucid style, suitable for forensic scientists and lawyers with minimal mathematical background.
- Includes a foreword by Ian Evett.
The clear and accessible style of this second edition makes this book ideal for all forensic scientists, applied statisticians and graduate students wishing to evaluate forensic findings from the perspective of probability and decision analysis. It will also appeal to lawyers and other scientists and professionals interested in the evaluation and interpretation of forensic findings, including decision making based on scientific information.
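A tiny numerical example conveys the kind of reasoning such networks formalise. The three-node chain and every probability below are invented for illustration; they are not taken from the book.

```python
from itertools import product

# H: the suspect is the source of the trace; M: the trace truly matches;
# R: a match is reported (allowing small laboratory error rates).
p_h = 0.01
p_m_given_h = {True: 0.99, False: 0.0001}  # False case = random-match prob.
p_r_given_m = {True: 0.98, False: 0.005}   # sensitivity / false-positive rate

def joint(h, m, r):
    """Joint probability factorised along the chain H -> M -> R."""
    ph = p_h if h else 1 - p_h
    pm = p_m_given_h[h] if m else 1 - p_m_given_h[h]
    pr = p_r_given_m[m] if r else 1 - p_r_given_m[m]
    return ph * pm * pr

# Inference by enumeration: P(H | R = reported match).
num = sum(joint(True, m, True) for m in (False, True))
den = sum(joint(h, m, True) for h, m in product((False, True), repeat=2))
posterior = num / den
print(f"P(source | reported match) = {posterior:.3f}")
```

Even this toy chain shows the value of modelling the reporting step explicitly: the small false-positive rate substantially tempers the posterior relative to a model that equates a reported match with a true one.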

Relevance: 20.00%

Abstract:

Forensic scientists working in 12 state or private laboratories participated in collaborative tests to improve the reliability of the presentation of DNA data at trial. These tests were motivated by the growing criticism of the power of DNA evidence. The experts' conclusions in the tests are presented and discussed in the context of the Bayesian approach to interpretation. The use of a Bayesian approach and subjective probabilities in trace evaluation permits, in an easy and intuitive manner, the integration into the decision procedure of any revision of the measure of uncertainty in the light of new information. Such an integration is especially useful with forensic evidence. Furthermore, we believe that this probabilistic model is a useful tool (a) to assist scientists in the assessment of the value of scientific evidence, (b) to help jurists in the interpretation of judicial facts and (c) to clarify the respective roles of scientists and of members of the court. Respondents to the survey were reluctant to apply this methodology in the assessment of DNA evidence.
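The mechanics of the Bayesian update discussed above reduce to the odds form of Bayes' theorem. The prior odds and likelihood ratio below are invented figures chosen only to show the arithmetic, not values from the study.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds x likelihood
# ratio. The prior odds belong to the court; the scientist reports only
# the likelihood ratio for the findings.
prior_odds = 1 / 1000            # court's prior odds on the source hypothesis
likelihood_ratio = 100_000       # P(findings | Hp) / P(findings | Hd)

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior odds {posterior_odds:.0f}:1, "
      f"posterior probability {posterior_prob:.3f}")
```

This separation of roles, the scientist supplying the likelihood ratio and the court supplying the prior, is exactly the division of labour the abstract argues the approach clarifies.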

Relevance: 20.00%

Abstract:

The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model of the network controlling the CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented in the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
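The core objects of this workflow, an asynchronous state transition graph and its attractors as terminal strongly connected components, can be sketched on a toy circuit. The two-gene cross-inhibition network below is an illustrative assumption, not the CD4+ T-cell model, and the brute-force attractor search is only viable for tiny state spaces (which is precisely why the scalable methods above are needed).

```python
from itertools import product

# Toy logical model: two genes that repress each other.
rules = [lambda s: 1 - s[1],    # gene 0 is ON iff gene 1 is OFF
         lambda s: 1 - s[0]]    # gene 1 is ON iff gene 0 is OFF

def async_successors(s):
    """Asynchronous updating: change one component at a time."""
    succ = []
    for i, f in enumerate(rules):
        v = f(s)
        if v != s[i]:
            succ.append(s[:i] + (v,) + s[i + 1:])
    return succ

states = list(product((0, 1), repeat=2))
graph = {s: async_successors(s) for s in states}   # state transition graph

def reachable(s, seen=None):
    seen = seen or {s}
    for t in graph[s]:
        if t not in seen:
            seen.add(t)
            reachable(t, seen)
    return seen

# A state lies in an attractor (terminal SCC) iff every state reachable
# from it can reach it back; fixed points have no outgoing transitions.
attractors = {s for s in states if all(s in reachable(t) for t in reachable(s))}
print("attractor states:", sorted(attractors))
```

The cross-inhibition circuit yields two stable fixed points, one gene ON and the other OFF, the classic bistable switch that multilevel models such as the Th-cell network generalise.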