12 results for Fixed point theory

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

The present paper studies the ruin probability of an insurer when excess-of-loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for the numerical approximation of the ruin probability, which involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings, and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process. (C) 2011 Elsevier Inc. All rights reserved.
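The fixed-point representation described above lends itself to approximation by successive substitution. Below is a minimal, hedged sketch of Banach (Picard) iteration for a generic contractive integral operator; the kernel, inhomogeneous term and contraction factor are illustrative stand-ins, not the operator derived in the paper.

```python
import numpy as np

# Toy contractive operator (Tf)(x) = g(x) + beta * \int_0^1 K(x, y) f(y) dy.
# g, K and beta are illustrative choices, not taken from the paper.
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
g = np.exp(-x)                                  # inhomogeneous term (illustrative)
K = np.exp(-np.abs(x[:, None] - x[None, :]))    # kernel (illustrative)
beta = 0.4                                      # small enough for T to be a contraction

def T(f):
    """Apply the integral operator once (simple Riemann-sum quadrature)."""
    return g + beta * (K * f[None, :]).sum(axis=1) * dx

f = np.zeros(n)                                 # start from the zero function
for _ in range(200):
    f_new = T(f)
    if np.max(np.abs(f_new - f)) < 1e-12:       # sup-norm stopping rule
        f = f_new
        break
    f = f_new

residual = np.max(np.abs(T(f) - f))
print(residual)   # numerically zero: f is the fixed point
```

Because T is a contraction, Banach's theorem guarantees the iterates converge geometrically to the unique fixed point from any starting function, which is the property the paper's recursive algorithm exploits.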

Relevance:

30.00%

Publisher:

Abstract:

Mitochondrial (M) and lipid droplet (L) volume density (vd) are often used in exercise research. Vd is the fraction of muscle volume occupied by M or L. These percentages are calculated by applying a grid to a 2D image taken with transmission electron microscopy; however, it is not known which grid best predicts these values. PURPOSE: To determine the grid with the least variability for Mvd and Lvd in human skeletal muscle. METHODS: Muscle biopsies were taken from the vastus lateralis of 10 healthy adults, trained (N=6) and untrained (N=4). Samples of 5-10 mg were fixed in 2.5% glutaraldehyde and embedded in EPON. Longitudinal sections of 60 nm were cut, and 20 images were taken at random at 33,000x magnification. Vd was calculated as the number of times M or L touched two intersecting grid lines (called a point) divided by the total number of points, using 3 grid sizes with squares of 1000x1000 nm sides (corresponding to 1 µm²), 500x500 nm (0.25 µm²) and 250x250 nm (0.0625 µm²). Statistics included the coefficient of variation (CV), one-way BS ANOVA and Spearman correlations. RESULTS: Mean age was 67 ± 4 years, mean VO2peak 2.29 ± 0.70 L/min and mean BMI 25.1 ± 3.7 kg/m². Mean Mvd was 6.39% ± 0.71 for the 1000 nm squares, 6.01% ± 0.70 for the 500 nm and 6.37% ± 0.80 for the 250 nm. Lvd was 1.28% ± 0.03 for the 1000 nm, 1.41% ± 0.02 for the 500 nm and 1.38% ± 0.02 for the 250 nm. The mean CV of the three grids was 6.65% ± 1.15 for Mvd, with no significant differences between grids (P>0.05). Mean CV for Lvd was 13.83% ± 3.51, with a significant difference between the 1000 nm squares and the two other grids (P<0.05). The 500 nm squares grid showed the least variability between subjects. Mvd showed a positive correlation with VO2peak (r = 0.89, p < 0.05) but not with weight, height, or age. No correlations were found with Lvd. CONCLUSION: Different grid sizes yield different variability in assessing skeletal muscle Mvd and Lvd.
The 500x500 nm grid (240 points) was more reliable than the 1000x1000 nm grid (56 points). The 250x250 nm grid (1023 points) did not show better reliability than the 500x500 nm grid, but was more time-consuming. Thus, a grid with 500x500 nm squares seems the best option. This is particularly relevant, as most grids used in the literature have either 100 or 400 points without clear information on their square size.
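The point-counting rule described above (vd = points hitting the structure divided by total grid points) and the coefficient of variation are straightforward to compute. The sketch below uses made-up hit counts for a hypothetical 240-point (500x500 nm) grid, purely for illustration; none of the numbers are study data.

```python
import numpy as np

def volume_density(hits, total_points):
    """Fraction of grid points landing on the structure, as a percentage."""
    return 100.0 * hits / total_points

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation over the mean."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# 20 images, 240-point grid; hypothetical hit counts around a ~6% Mvd
rng = np.random.default_rng(0)
hits = rng.binomial(240, 0.06, size=20)
vd_per_image = [volume_density(h, 240) for h in hits]

print(round(np.mean(vd_per_image), 2), round(coefficient_of_variation(vd_per_image), 2))
```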

Relevance:

30.00%

Publisher:

Abstract:

Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain that it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character.
This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness. Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered.
In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
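Backward induction, the solution concept for perfect-information games discussed above, can be sketched as a simple recursion over the game tree. The two-player tree below is a made-up example, not one from the thesis.

```python
def backward_induction(node):
    """Return (utilities, path of choices) for rational play from `node`.

    A terminal node is a tuple of numeric utilities; an internal node is
    (player, {action: subtree}) where `player` indexes the utility tuple.
    """
    if isinstance(node, tuple) and all(isinstance(u, (int, float)) for u in node):
        return node, []
    player, actions = node
    best = None
    for action, subtree in actions.items():
        utils, path = backward_induction(subtree)
        # The player at this node picks the action maximising their own utility.
        if best is None or utils[player] > best[0][player]:
            best = (utils, [action] + path)
    return best

# Player 0 moves first, then player 1; leaves carry utilities (u0, u1).
# After R, player 1 would play r (payoff 2 beats 0), leaving player 0 only 1;
# so player 0 prefers L, where player 1 answers l.
game = (0, {
    "L": (1, {"l": (2, 1), "r": (0, 0)}),
    "R": (1, {"l": (3, 0), "r": (1, 2)}),
})
print(backward_induction(game))  # ((2, 1), ['L', 'l'])
```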

Relevance:

30.00%

Publisher:

Abstract:

This thesis proposes to carry on the philosophical work begun in Casati and Varzi's seminal book Parts and Places, by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a necessary and sufficient condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built.
The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as 'x is R-related to y as z looks at w'; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations with a locational and a perspectival part, and show how this four-place structure leads to a unified framework of perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of "qua objects" for an analysis of points of view, we show at last how our four-place approach to perspectival relations compares to more traditional understandings.
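The segment-based notion of convexity used above (a region is convex iff the straight segment between any two of its points lies within it) can be illustrated numerically. The sketch below checks the segment criterion on discrete 2D point sets; sampled segments stand in for the mereotopological segment operator, and both test regions are illustrative examples, not objects from the thesis.

```python
from itertools import combinations

def segment_points(p, q, steps=50):
    """Sample points on the straight segment between p and q."""
    (x1, y1), (x2, y2) = p, q
    return [(x1 + (x2 - x1) * t / steps, y1 + (y2 - y1) * t / steps)
            for t in range(steps + 1)]

def is_convex(region, contains):
    """Segment criterion: every segment between points of `region` stays inside."""
    return all(all(contains(pt) for pt in segment_points(p, q))
               for p, q in combinations(region, 2))

# A disc is convex; an L-shaped region is not.
disc_pts = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if x * x + y * y <= 9]
in_disc = lambda p: p[0] ** 2 + p[1] ** 2 <= 9.0 + 1e-9

l_pts = [(0, 0), (3, 0), (0, 3)]
in_l = lambda p: (0 <= p[0] <= 3 and 0 <= p[1] <= 1) or (0 <= p[0] <= 1 and 0 <= p[1] <= 3)

print(is_convex(disc_pts, in_disc), is_convex(l_pts, in_l))  # True False
```

The L-shape fails because the segment from (3, 0) to (0, 3) leaves the region, which is exactly the kind of counterexample the convexity predicate is meant to exclude.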

Relevance:

30.00%

Publisher:

Abstract:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N (the number of contributors) and the actual value taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that relies on categorical assumptions about N.
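The two strategies compared above can be sketched as follows: the deterministic rule takes the minimum N able to 'explain' the alleles (a diploid contributor shows at most two alleles per locus), while the probabilistic rule returns a posterior over N via Bayes' theorem. The prior and likelihood values below are illustrative stand-ins, not the paper's casework figures.

```python
import math
from fractions import Fraction

def minimum_contributors(allele_counts_per_locus):
    """Deterministic rule: ceil(max observed alleles at any locus / 2)."""
    return max(math.ceil(c / 2) for c in allele_counts_per_locus)

def posterior_over_n(prior, likelihood):
    """Bayes' theorem: P(N=n | E) proportional to P(E | N=n) P(N=n), normalised."""
    joint = {n: prior[n] * likelihood[n] for n in prior}
    total = sum(joint.values())
    return {n: joint[n] / total for n in joint}

# A locus showing 5 distinct alleles needs at least 3 diploid contributors.
print(minimum_contributors([3, 5, 4]))  # 3

# Illustrative prior and likelihoods; N=2 cannot explain 5 alleles, so its
# likelihood is zero and the posterior spreads over N=3 and N=4.
prior = {2: Fraction(1, 2), 3: Fraction(3, 10), 4: Fraction(1, 5)}
likelihood = {2: Fraction(0), 3: Fraction(2, 10), 4: Fraction(1, 10)}
post = posterior_over_n(prior, likelihood)
print(post[3], post[4])  # 3/4 1/4
```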

Relevance:

30.00%

Publisher:

Abstract:

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economy and sociology are well described neither by a fixed geographical position of the individuals on a regular lattice, nor by a random graph. Furthermore, it is well known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks, using diverse statistical measures. Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
Thereafter, I combine evolutionary game theory with several network models along with the studied coauthorship network in order to highlight which specific network properties foster cooperation and shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree-heterogeneity of social networks and their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and the individual payoff calculations.
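The Nowak and May (1992) mechanism discussed above, i.e. cooperation sustained purely by spatial structure, can be sketched as a lattice prisoner's dilemma in which each player interacts only with its four neighbours and then imitates the best-scoring player in its neighbourhood. Grid size, initial cooperator fraction and the temptation parameter b are illustrative choices, and the 4-neighbour update is a simplified variant of the original model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, b = 20, 1.6                    # lattice side; temptation to defect (1 < b < 2)
coop = rng.random((n, n)) < 0.6   # True = cooperator

NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # von Neumann, torus wrap-around

def step(coop):
    # Payoff: a cooperator earns 1 per cooperating neighbour; a defector earns b.
    neigh_coop = sum(np.roll(np.roll(coop, dx, 0), dy, 1) for dx, dy in NEIGHBOURS)
    payoff = np.where(coop, 1.0, b) * neigh_coop
    # Imitate the strategy of the highest-payoff player in the neighbourhood (incl. self).
    best_payoff = payoff.copy()
    best_strategy = coop.copy()
    for dx, dy in NEIGHBOURS:
        p = np.roll(np.roll(payoff, dx, 0), dy, 1)
        s = np.roll(np.roll(coop, dx, 0), dy, 1)
        better = p > best_payoff
        best_payoff = np.where(better, p, best_payoff)
        best_strategy = np.where(better, s, best_strategy)
    return best_strategy.astype(bool)

for _ in range(50):
    coop = step(coop)
print(f"cooperator fraction after 50 steps: {coop.mean():.2f}")
```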

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to discuss whether children have a capacity for deontic reasoning that is irreducible to mentalizing. The results of two experiments point to the existence of such non-mentalistic understanding and prediction of the behaviour of others. In Study 1, young children (3- and 4-year-olds) were told different versions of classic false-belief tasks, some of which were modified by the introduction of a rule or a regularity. When the task (a standard change-of-location task) included a rule, the performance of 3-year-olds, who fail traditional false-belief tasks, significantly improved. In Study 2, 3-year-olds proved able to infer a rule from a social situation and to use it in order to predict the behaviour of a character involved in a modified version of the false-belief task. These studies suggest that rules play a central role in the social cognition of young children and that deontic reasoning might not necessarily involve mind reading.

Relevance:

30.00%

Publisher:

Abstract:

Background: Variable definitions of outcome (Constant score, Simple Shoulder Test [SST]) have been used to assess outcome after shoulder treatment, although none has been accepted as the universal standard. Physicians lack an objective method to reliably assess the activity of their patients in dynamic conditions. Our purpose was to clinically validate the shoulder kinematic scores given by a portable movement analysis device, using the activities of daily living described in the SST as a reference. The secondary objective was to determine whether this device could be used to document the effectiveness of shoulder treatments (for glenohumeral osteoarthritis and rotator cuff disease) and detect early failures. Methods: A clinical trial including 34 patients and a control group of 31 subjects over an observation period of 1 year was set up. Evaluations were made at baseline and at 3, 6, and 12 months after surgery by 2 independent observers. Miniature sensors (3-dimensional gyroscopes and accelerometers) allowed kinematic scores to be computed. They were compared with the regular outcome scores: SST; Disabilities of the Arm, Shoulder and Hand; American Shoulder and Elbow Surgeons; and Constant. Results: Good to excellent correlations (0.61-0.80) were found between kinematic and clinical scores. Significant differences were found at each follow-up in comparison with the baseline status for all the kinematic scores (P < .015). The kinematic scores were able to point out abnormal patient outcomes at the first postoperative follow-up. Conclusion: Kinematic scores add information to the regular outcome tools. They offer an effective way to measure the functional performance of patients with shoulder pathology and have the potential to detect early treatment failures. Level of evidence: Level II, Development of Diagnostic Criteria, Diagnostic Study. (C) 2011 Journal of Shoulder and Elbow Surgery Board of Trustees.

Relevance:

30.00%

Publisher:

Abstract:

After years of reciprocal lack of interest, if not opposition, neuroscience and psychoanalysis are poised for a renewed dialogue. This article discusses some aspects of Freudian metapsychology and its link with specific biological mechanisms. It highlights in particular how the physiological concept of homeostasis resonates with certain fundamental concepts of psychoanalysis. Similarly, the authors underline how Freud's and Damasio's theories of brain functioning display remarkable complementarities, especially through their common reference to Meynert and James. Furthermore, the Freudian theory of drives is discussed in the light of current neurobiological evidence of neural plasticity and trace formation, and of their relationships with the processes of homeostasis. The ensuing dynamics between traces and homeostasis opens novel avenues for considering inner life in reference to the establishment of fantasies unique to each subject. The lack of determinism, within a context of determinism, implied by plasticity and reconsolidation participates in the emergence of singularity, the creation of uniqueness and the unpredictable future of the subject. There is a gap in determinism inherent to biology itself. Uniqueness and discontinuity: this should today be the focus of the questions raised in neuroscience. Neuroscience needs to establish the new bases of a "discontinuous" biology. Psychoanalysis can offer neuroscience the possibility of thinking discontinuity. Neuroscience and psychoanalysis thus meet in an unexpected way with regard to discontinuity, and this is a new point of convergence between them.

Relevance:

30.00%

Publisher:

Abstract:

The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meaning of the component items of a sentence and their relationship to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on one meaning element, which is taken as the principal qualificand (mukhyaviśesya) and qualified by the other meaning elements. This analysis is the object of continuous debate, over a period of more than a thousand years, between the philosophers of the schools of Mīmāmsā, Nyāya (mainly in its Navya form) and Vyākarana. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of the theory. The Mīmāmsakas argue that the principal qualificand is what they call bhāvanā, 'bringing into being', 'efficient force' or 'productive operation', expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaranas take it to be the operation expressed by the verbal root.
All the participants rely on the Pāninian grammar, insofar as the Mīmāmsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but use different interpretive strategies in order to justify their views, which are often in overt contradiction with the interpretation of the Pāninian rules accepted by the Vaiyākaranas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicalities. Historically, this theory arises in Mīmāmsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaranas enter this debate last (with Bhattoji Dīksita and Kaunda Bhatta), with the declared aim of refuting the arguments of the Mīmāmsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāninian grammar. The central argument has focused on the capacity of the initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate.
Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but also makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.

Relevance:

30.00%

Publisher:

Abstract:

Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, where the source is abstracted to be point-like and fixed in time, and where only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of the establishment of the Bicoid gradient in Drosophila embryos. The gradient forms by protein translation from an mRNA distribution followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with a decreasing exponential behavior of the profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to the ones obtained from static sources can be achieved.
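For the steady state with linear degradation described above, the profile away from the source decays exponentially with decay length lambda = sqrt(D/k). The sketch below solves the 1D steady-state equation D c''(x) - k c(x) + s(x) = 0 for a Gaussian (extended) source by finite differences and recovers that decay length numerically; D, k and the source width are illustrative values, not parameters fitted to Bicoid.

```python
import numpy as np

D, k = 1.0, 4.0                  # diffusion and degradation rates (illustrative)
lam = np.sqrt(D / k)             # analytic decay length = 0.5

n = 401
x = np.linspace(0.0, 10.0, n)
dx = x[1] - x[0]
s = np.exp(-(x / 0.2) ** 2)      # extended (Gaussian) source near the origin

# Finite-difference system for D c'' - k c = -s,
# zero-flux at x = 0 and c = 0 at the far boundary.
main = np.full(n, -2.0 * D / dx**2 - k)
off = np.full(n - 1, D / dx**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
A[0, 1] = 2.0 * D / dx**2        # reflecting (zero-flux) boundary at the origin
A[-1, :] = 0.0
A[-1, -1] = 1.0                  # absorbing far boundary: c = 0
rhs = -s
rhs[-1] = 0.0
c = np.linalg.solve(A, rhs)

# Check the exponential tail: the log-slope between x = 2 and x = 4 gives -1/lam.
i, j = np.searchsorted(x, 2.0), np.searchsorted(x, 4.0)
slope = (np.log(c[j]) - np.log(c[i])) / (x[j] - x[i])
print(round(-1.0 / slope, 3))    # close to lam = 0.5
```

The same computation with a point-like source differs only in `s`, which is the sense in which the Green's-function framework treats arbitrary sources uniformly.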