55 results for continuous model theory
at Université de Lausanne, Switzerland
Abstract:
Motivation: Hormone pathway interactions are crucial in shaping plant development, such as the synergism between the auxin and brassinosteroid pathways in cell elongation. Both hormone pathways have been characterized in detail, revealing several feedback loops. The complexity of this network, combined with a shortage of kinetic data, renders its quantitative analysis virtually impossible at present. Results: As a first step towards overcoming these obstacles, we analyzed the network using a Boolean logic approach to build models of auxin and brassinosteroid signaling, and their interaction. To compare these discrete dynamic models across conditions, we transformed them into qualitative continuous systems, which predict network component states more accurately and can accommodate kinetic data as they become available. To this end, we developed an extension for the SQUAD software, allowing semi-quantitative analysis of network states. Contrasting the developmental output depending on cell type-specific modulators enabled us to identify a most parsimonious model, which explains initially paradoxical mutant phenotypes and reveals a novel physiological feature.
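The Boolean-to-continuous step can be illustrated with a minimal sketch: each node's state is relaxed from {0, 1} to [0, 1] and driven by a steep sigmoid applied to its Boolean input. The sigmoid form and parameters below are a generic interpolation in the spirit of such qualitative continuous systems, not SQUAD's exact standardized equations.

```python
# Illustrative sketch: interpolating a Boolean activation rule into a
# qualitative continuous system. Node states live in [0, 1]; a steep
# sigmoid recovers near-Boolean behaviour at the extremes.
import math

def sigmoid(w, h=10.0):
    # Steep logistic centred at 0.5: w near 1 -> ~1, w near 0 -> ~0.
    return 1.0 / (1.0 + math.exp(-h * (w - 0.5)))

def step(x, inputs, dt=0.05, gamma=1.0):
    # Euler step of dx/dt = sigmoid(combined input) - gamma * x
    w = min(1.0, max(0.0, inputs))  # clamp combined activation to [0, 1]
    return x + dt * (sigmoid(w) - gamma * x)

# Drive a single node with full activation (w = 1) towards steady state.
x = 0.0
for _ in range(2000):
    x = step(x, inputs=1.0)
```

With full activation the node settles near 1 (at sigmoid(1) ≈ 0.99), mimicking the Boolean ON state while allowing graded intermediate values once kinetic data are plugged in.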
Abstract:
The Baldwin effect can be observed if phenotypic learning influences the evolutionary fitness of individuals, which can in turn accelerate or decelerate evolutionary change. Evidence for both learning-induced acceleration and deceleration can be found in the literature. Although the results for both outcomes were supported by specific mathematical or simulation models, no general predictions have been achieved so far. Here we propose a general framework to predict whether evolution benefits from learning or not. It is formulated in terms of the gain function, which quantifies the proportional change of fitness due to learning depending on the genotype value. With an inductive proof we show that a positive gain-function derivative implies that learning accelerates evolution, and a negative one implies deceleration under the condition that the population is distributed on a monotonic part of the fitness landscape. We show that the gain-function framework explains the results of several specific simulation models. We also use the gain-function framework to shed some light on the results of a recent biological experiment with fruit flies.
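In the notation suggested by the abstract (the symbols here are illustrative), the gain function compares fitness with and without learning at genotype value x, and its derivative determines the direction of the effect:

```latex
g(x) \;=\; \frac{w_{\mathrm{learn}}(x) - w(x)}{w(x)}, \qquad
g'(x) > 0 \;\Rightarrow\; \text{acceleration}, \qquad
g'(x) < 0 \;\Rightarrow\; \text{deceleration}
```

Both implications hold under the stated condition that the population is distributed on a monotonic part of the fitness landscape.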
Abstract:
In a series of three experiments, participants made inferences about which of a pair of objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, & Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
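The recognition heuristic's decision rule is simple enough to state as code. This is a minimal sketch of the paired-comparison case; the city names and the recognition set are hypothetical.

```python
# Minimal sketch of the recognition heuristic for paired comparison.
def choose(a, b, recognized):
    """Infer which of two objects scores higher on the criterion."""
    ra, rb = a in recognized, b in recognized
    if ra and not rb:
        return a      # heuristic discriminates: pick the recognized object
    if rb and not ra:
        return b
    return None       # both or neither recognized: heuristic is silent

# Hypothetical recognition set of a participant.
recognized = {"Munich", "Hamburg"}
```

The heuristic only discriminates when exactly one object is recognized; in the remaining cases the participant must guess or draw on further knowledge, which is what Experiment 3 manipulated.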
Abstract:
BACKGROUND: Urine catecholamines, vanillylmandelic, and homovanillic acid are recognized biomarkers for the diagnosis and follow-up of neuroblastoma. Plasma free (f) and total (t) normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MT) could represent a convenient alternative to those urine markers. The primary objective of this study was to establish pediatric centile charts for plasma metanephrines. Secondarily, we explored their diagnostic performance in 10 patients with neuroblastoma. PROCEDURE: We recruited 191 children (69 females) free of neuroendocrine disease to establish reference intervals for plasma metanephrines, reported as centile curves for a given age and sex based on a parametric method using fractional polynomial models. Urine markers and plasma metanephrines were measured in 10 children with neuroblastoma at diagnosis. Plasma total metanephrines were measured by HPLC with coulometric detection and plasma free metanephrines by tandem LC-MS. RESULTS: We observed a significant age-dependence for tNMN, fNMN, and fMN, and a gender and age-dependence for tMN, fNMN, and fMN. Free MT was below the lower limit of quantification in 94% of the children. All patients with neuroblastoma at diagnosis were above the 97.5th percentile for tMT, tNMN, fNMN, and fMT, whereas their fMN and tMN were mostly within the normal range. As expected, urine assays were inconsistently predictive of the disease. CONCLUSIONS: A continuous model incorporating all data for a given analyte represents an appealing alternative to arbitrary partitioning of reference intervals across age categories. Plasma metanephrines are promising biomarkers for neuroblastoma, and their performance needs to be confirmed in a prospective study on a large cohort of patients. Pediatr Blood Cancer 2015;62:587-593. © 2015 Wiley Periodicals, Inc.
Abstract:
In the framework of the classical compound Poisson process in collective risk theory, we study a modification of the horizontal dividend barrier strategy by introducing random observation times at which dividends can be paid and ruin can be observed. This model contains both the continuous-time and the discrete-time risk model as a limit and represents a certain type of bridge between them which still enables the explicit calculation of moments of total discounted dividend payments until ruin. Numerical illustrations for several sets of parameters are given and the effect of random observation times on the performance of the dividend strategy is studied.
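A Monte Carlo sketch of the model can make the setup concrete: a compound Poisson surplus with a horizontal barrier b, where dividends are paid and ruin is checked only at random (here exponentially spaced) observation times. All parameter values are illustrative, and this simulates a single path rather than reproducing the paper's explicit moment formulas.

```python
# One simulated path of the randomized-observation dividend barrier model.
import math
import random

def discounted_dividends(x=5.0, b=10.0, c=2.0, lam=1.0, mu=1.5,
                         obs_rate=4.0, delta=0.03, horizon=200.0, seed=1):
    rng = random.Random(seed)
    t, surplus, total = 0.0, x, 0.0
    t_claim = rng.expovariate(lam)       # next claim arrival time
    t_obs = rng.expovariate(obs_rate)    # next random observation time
    while t < horizon:
        t_next = min(t_claim, t_obs, horizon)
        surplus += c * (t_next - t)      # premiums accrue continuously
        t = t_next
        if t == horizon:
            break
        if t_claim <= t_obs:
            surplus -= rng.expovariate(1.0 / mu)   # exponential claim size
            t_claim = t + rng.expovariate(lam)
        else:
            if surplus < 0:
                break                    # ruin is only observed at these times
            if surplus > b:              # pay the overshoot as a dividend
                total += math.exp(-delta * t) * (surplus - b)
                surplus = b
            t_obs = t + rng.expovariate(obs_rate)
    return total

paid = discounted_dividends()
```

Letting obs_rate grow recovers the continuous-time barrier model, while a deterministic observation grid recovers the discrete-time model, which is the bridging property the abstract describes.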
Abstract:
Continuous positive airway pressure, aimed at preventing pulmonary atelectasis, has been used for decades to reduce lung injury in critically ill patients. In neonatal practice, it is increasingly used worldwide as a primary form of respiratory support due to its low cost and because it reduces the need for endotracheal intubation and conventional mechanical ventilation. We studied the anesthetized in vivo rat and determined the optimal circuit design for delivery of continuous positive airway pressure. We investigated the effects of continuous positive airway pressure following lipopolysaccharide administration in the anesthetized rat. Whereas neither continuous positive airway pressure nor lipopolysaccharide alone caused lung injury, continuous positive airway pressure applied following intravenous lipopolysaccharide resulted in increased microvascular permeability, elevated cytokine protein and mRNA production, and impaired static compliance. A dose-response relationship was demonstrated whereby higher levels of continuous positive airway pressure (up to 6 cmH2O) caused greater lung injury. Lung injury was attenuated by pretreatment with dexamethasone. These data demonstrate that despite optimal circuit design, continuous positive airway pressure causes significant lung injury (proportional to the airway pressure) in the setting of circulating lipopolysaccharide. Although we would currently avoid direct extrapolation of these findings to clinical practice, we believe that in the context of increasing clinical use, these data are grounds for concern and warrant further investigation.
Abstract:
OBJECTIVE: To determine the influence of nebulizer types and nebulization modes on bronchodilator delivery in a mechanically ventilated pediatric lung model. DESIGN: In vitro, laboratory study. SETTING: Research laboratory of a university hospital. INTERVENTIONS: Using albuterol as a marker, three nebulizer types (jet nebulizer, ultrasonic nebulizer, and vibrating-mesh nebulizer) were tested in three nebulization modes in a nonhumidified bench model mimicking the ventilatory pattern of a 10-kg infant. The amounts of albuterol deposited on the inspiratory filters (inhaled drug) at the end of the endotracheal tube, on the expiratory filters, and remaining in the nebulizers or in the ventilator circuit were determined. Particle size distribution of the nebulizers was also measured. MEASUREMENTS AND MAIN RESULTS: The inhaled drug was 2.8% ± 0.5% for the jet nebulizer, 10.5% ± 2.3% for the ultrasonic nebulizer, and 5.4% ± 2.7% for the vibrating-mesh nebulizer in intermittent nebulization during the inspiratory phase (p < 0.01). The most efficient nebulizer was the vibrating-mesh nebulizer in continuous nebulization (13.3% ± 4.6%, p < 0.01). Depending on the nebulizers, a variable but important part of albuterol was observed as remaining in the nebulizers (jet and ultrasonic nebulizers), or being expired or lost in the ventilator circuit (all nebulizers). Only small particles (range 2.39-2.70 µm) reached the end of the endotracheal tube. CONCLUSIONS: Important differences between nebulizer types and nebulization modes were seen for albuterol deposition at the end of the endotracheal tube in an in vitro pediatric ventilator-lung model. New aerosol devices, such as ultrasonic and vibrating-mesh nebulizers, were more efficient than the jet nebulizer.
Abstract:
Animal models of infective endocarditis (IE) induced by high-grade bacteremia revealed the pathogenic roles of Staphylococcus aureus surface adhesins and platelet aggregation in the infection process. In humans, however, S. aureus IE possibly occurs through repeated bouts of low-grade bacteremia from a colonized site or intravenous device. Here we used a rat model of IE induced by continuous low-grade bacteremia to explore further the contributions of S. aureus virulence factors to the initiation of IE. Rats with aortic vegetations were inoculated by continuous intravenous infusion (0.0017 ml/min over 10 h) with 10^6 CFU of Lactococcus lactis pIL253 or a recombinant L. lactis strain expressing an individual S. aureus surface protein (ClfA, FnbpA, BCD, or SdrE) conferring a particular adhesive or platelet aggregation property. Vegetation infection was assessed 24 h later. Plasma was collected at 0, 2, and 6 h postinoculation to quantify the expression of tumor necrosis factor (TNF), interleukin 1α (IL-1α), IL-1β, IL-6, and IL-10. The percentage of vegetation infection relative to that with strain pIL253 (11%) increased when binding to fibrinogen was conferred on L. lactis (ClfA strain) (52%; P = 0.007) and increased further with adhesion to fibronectin (FnbpA strain) (75%; P < 0.001). Expression of fibronectin binding alone was not sufficient to induce IE (BCD strain) (10% of infection). Platelet aggregation increased the risk of vegetation infection (SdrE strain) (30%). Conferring adhesion to fibrinogen and fibronectin favored IL-1β and IL-6 production. Our results, with a model of IE induced by low-grade bacteremia, resembling human disease, extend the essential role of fibrinogen binding in the initiation of S. aureus IE. Triggering of platelet aggregation or an inflammatory response may contribute to or promote the development of IE.
Abstract:
The determination of characteristic cardiac parameters, such as displacement, stress, and strain distribution, is essential for an understanding of the mechanics of the heart. The calculation of these parameters has been limited until recently by the use of idealised mathematical representations of biventricular geometries and by applying simple material laws. On the basis of 20 short-axis heart slices and in consideration of linear and nonlinear material behaviour, we have developed an FE model with about 100,000 degrees of freedom. Marching Cubes and Phong's incremental shading technique were used to visualise the three-dimensional geometry. In a quasi-static FE analysis, continuous distributions of regional stress and strain corresponding to the end-systolic state were calculated. Substantial regional variation of the von Mises stress and the total strain energy was observed at all levels of the heart model. The results of both the linear elastic model and the model with a nonlinear material description (Mooney-Rivlin) were compared. While the stress distribution and peak stress values were found to be comparable, the displacement vectors obtained with the nonlinear model were generally higher than in the linear elastic case, indicating the need to include nonlinear effects.
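The von Mises equivalent stress reported above is a standard scalar summary of a 3-D stress state; its formula is fixed, though the sample inputs below are arbitrary.

```python
# Von Mises equivalent stress from the six components of the stress tensor
# (normal stresses sx, sy, sz and shear stresses txy, tyz, tzx).
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    return math.sqrt(
        0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
        + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)
    )
```

Two sanity checks: under uniaxial tension the von Mises stress equals the applied stress, and under pure hydrostatic pressure it vanishes, which is why it is a convenient regional comparison measure across the heart model.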
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, strategic situations in which the players choose only once and simultaneously, and dynamic games, strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices made so far. In the case of imperfect information, by contrast, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character.
This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states on which players base their decisions are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness. Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered.
In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
Abstract:
The evolution of a quantitative phenotype is often envisioned as a trait substitution sequence where mutant alleles repeatedly replace resident ones. In infinite populations, the invasion fitness of a mutant in this two-allele representation of the evolutionary process is used to characterize features about long-term phenotypic evolution, such as singular points, convergence stability (established from first-order effects of selection), branching points, and evolutionary stability (established from second-order effects of selection). Here, we try to characterize long-term phenotypic evolution in finite populations from this two-allele representation of the evolutionary process. We construct a stochastic model describing evolutionary dynamics at non-rare mutant allele frequencies. We then derive stability conditions based on stationary average mutant frequencies in the limit of vanishing mutation rates. We find that the second-order stability condition obtained from second-order effects of selection is identical to convergence stability. Thus, in two-allele systems in finite populations, convergence stability is enough to characterize long-term evolution under the trait substitution sequence assumption. We perform individual-based simulations to confirm our analytic results.
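A minimal individual-based sketch in the spirit of the two-allele representation is a Wright-Fisher model tracking whether a single mutant copy invades a finite resident population. This is only a generic illustration (the paper's model, parameters, and statistics are not reproduced here), but it shows the kind of simulation used to confirm such analytic results.

```python
# Wright-Fisher sketch: fixation of a single mutant with selective
# advantage s in a haploid population of size N (parameters illustrative).
import random

def fixation_frequency(N=50, s=0.1, trials=2000, seed=7):
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k = 1                                   # one initial mutant copy
        while 0 < k < N:
            # Sampling probability weighted by relative fitness 1 + s.
            p = k * (1 + s) / (k * (1 + s) + (N - k))
            k = sum(rng.random() < p for _ in range(N))  # binomial resample
        fixed += (k == N)
    return fixed / trials

phat = fixation_frequency()
```

A beneficial mutant fixes noticeably more often than the neutral expectation of 1/N, which is the basic signal that stationary allele-frequency statistics pick up in finite populations.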
Abstract:
Gifted children develop asynchronously, often advanced for their age cognitively, but at or between their chronological and mental ages socially and emotionally (Robinson, 2008). In order to help gifted children and adolescents develop and practice social and emotional self-regulation skills, we investigated the use of an Adlerian play therapy approach during pen-and-paper role-playing games. Additionally, we used Goffman's (1961, 1974) social role identification and distance to encourage participants to experiment with new identities. Herein, we propose a psychosocial model of interactions during role-playing games based on Goffman's theory and Adlerian play therapy techniques, and suggest that role-playing games are an effective way of intervening with gifted children and adolescents to improve their intra- and interpersonal skills. We specifically targeted intrapersonal skills of exercising creativity, becoming self-aware, and setting individual goals by raising participants' awareness of their privately logical reasons for making decisions and their levels of social interest. We also targeted their needs and means of seeking significance in the group to promote collaboration and interaction skills with other gifted peers through role analysis, embracement, and distancing. We report results from a case study and conclude that role-playing games deserve more attention, both from researchers and clinical practitioners, because they encourage change while improving young clients' social and emotional development.
Abstract:
Analyzing the relationship between the baseline value and subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear for non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data of body mass index in children, we review statistical methods for the analysis of the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, it is better to rely on multiple repeated measurements at different times, and a linear random effects model is a standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method, which adjusts the estimated regression coefficient of observed change on baseline value for measurement error variance, provides accurate estimates. The adequacy of the methods to assess the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
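The two-wave problem and the correction can be shown in a short simulation. The sketch below assumes the measurement-error variance is known, and all numbers (true slope, variances, sample size) are illustrative; the correction formula follows from the fact that shared error both attenuates the slope and couples baseline and change.

```python
# Simulation sketch of Blomqvist-style correction for regressing change
# on baseline when the baseline is measured with error.
import numpy as np

rng = np.random.default_rng(0)
n, beta, var_e = 20_000, -0.3, 0.25          # true slope; known error variance
T = rng.normal(0.0, 1.0, n)                  # true baseline
change_true = beta * T + rng.normal(0.0, 0.3, n)
x = T + rng.normal(0.0, np.sqrt(var_e), n)   # observed baseline (wave 1)
y = T + change_true + rng.normal(0.0, np.sqrt(var_e), n)  # wave 2
c = y - x                                    # observed change

var_x = np.var(x, ddof=1)
b_naive = np.cov(c, x)[0, 1] / var_x         # biased by coupling + attenuation
b_corrected = (b_naive * var_x + var_e) / (var_x - var_e)
```

With these settings the naive slope is roughly -0.44 for a true slope of -0.3, while the corrected estimate recovers the true value, illustrating why information on measurement error is decisive with only two waves.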
Abstract:
Leaders must scan the internal and external environment, chart strategic and task objectives, and provide performance feedback. These instrumental leadership (IL) functions go beyond the motivational and quid pro quo leader behaviors that comprise the full-range leadership model (transformational, transactional, and laissez-faire). In four studies we examined the construct validity of IL. We found evidence for a four-factor IL model that was highly prototypical of good leadership. IL predicted top-level leader emergence controlling for the full-range factors, initiating structure, and consideration. It also explained unique variance in outcomes beyond the full-range factors; the effects of transformational leadership were vastly overstated when IL was omitted from the model. We discuss the importance of a "fuller full-range" leadership theory for theory and practice. We also showcase our methodological contributions regarding corrections for common method variance (i.e., endogeneity) bias using two-stage least squares (2SLS) regression and Monte Carlo split-sample designs.
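The 2SLS logic invoked here can be sketched with a generic just-identified instrumental-variable example; the variables, effect sizes, and data below are invented for illustration and are not the studies' models.

```python
# Sketch of why OLS is biased under endogeneity and how a valid instrument
# fixes it (just-identified case, where 2SLS reduces to the IV ratio).
import numpy as np

rng = np.random.default_rng(42)
n, beta = 20_000, 0.5
z = rng.normal(size=n)                    # instrument: shifts x, not y directly
u = rng.normal(size=n)                    # common-method / omitted factor
x = z + 0.8 * u + rng.normal(scale=0.5, size=n)  # endogenous predictor
y = beta * x + u                          # u also drives y, so x and the
                                          # error term are correlated

b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)    # biased upward
b_2sls = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # IV / 2SLS estimate
```

The OLS slope absorbs the shared factor u and overstates the effect, while the instrument-based estimate recovers the true coefficient, which is the sense in which transformational-leadership effects can be overstated when such corrections are omitted.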