184 results for agent theory


Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To investigate the ability of inversion recovery ON-resonant water suppression (IRON) in conjunction with P904 (superparamagnetic nanoparticles consisting of a maghemite core coated with a low-molecular-weight amino-alcohol derivative of glucose) to perform steady-state equilibrium-phase MR angiography (MRA) over a wide dose range. MATERIALS AND METHODS: Experiments were approved by the institutional animal care committee. Rabbits (n = 12) were imaged at baseline and serially after the administration of 10 incremental doses of 0.57-5.7 mg Fe/kg P904. Conventional T1-weighted and IRON MRA were obtained on a clinical 1.5 Tesla (T) scanner to image the thoracic and abdominal aorta and the peripheral vessels. Contrast-to-noise ratios (CNR) and vessel sharpness were quantified. RESULTS: Using IRON MRA, CNR and vessel sharpness progressively increased with incremental doses of the contrast agent P904, exhibiting consistently higher contrast values than T1-weighted MRA over a very wide range of contrast agent doses (CNR of 18.8 ± 5.6 for IRON versus 11.1 ± 2.8 for T1-weighted MRA at 1.71 mg Fe/kg, P = 0.02, and 19.8 ± 5.9 for IRON versus -0.8 ± 1.4 for T1-weighted MRA at 3.99 mg Fe/kg, P = 0.0002). Similar results were obtained for vessel sharpness in peripheral vessels (vessel sharpness of 46.76 ± 6.48% for IRON versus 33.20 ± 3.53% for T1-weighted MRA at 1.71 mg Fe/kg, P = 0.002, and of 48.66 ± 5.50% for IRON versus 19.00 ± 7.41% for T1-weighted MRA at 3.99 mg Fe/kg, P = 0.003). CONCLUSION: Our study suggests that quantitative CNR and vessel sharpness after the injection of P904 are consistently higher for IRON MRA than for conventional T1-weighted MRA. These findings apply over a wide range of contrast agent doses.
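The abstract reports CNR values without defining the estimator used; for orientation, a common definition (which may differ in detail from the study's own) is

```latex
\mathrm{CNR} = \frac{S_{\text{vessel}} - S_{\text{background}}}{\sigma_{\text{noise}}}
```

where S denotes the mean signal intensity in a region of interest and \sigma_{\text{noise}} the standard deviation of the background noise.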

Relevance:

20.00%

Publisher:

Abstract:

Arising from M. A. Nowak, C. E. Tarnita & E. O. Wilson, Nature 466, 1057-1062 (2010); Nowak et al. reply. Nowak et al. argue that inclusive fitness theory has been of little value in explaining the natural world and that it has led to negligible progress in explaining the evolution of eusociality. However, we believe that their arguments are based upon a misunderstanding of evolutionary theory and a misrepresentation of the empirical literature. We will focus our comments on three general issues.

Relevance:

20.00%

Publisher:

Abstract:

The antimicrobial metabolite 2,4-diacetylphloroglucinol (2,4-DAPG) contributes to the capacity of Pseudomonas fluorescens strain CHA0 to control plant diseases caused by soilborne pathogens. A 2,4-DAPG-negative Tn5 insertion mutant of strain CHA0 was isolated, and the nucleotide sequence of the 4-kb genomic DNA region adjacent to the Tn5 insertion site was determined. Four open reading frames were identified, two of which were homologous to phlA, the first gene of the 2,4-DAPG biosynthetic operon, and to the phlF gene encoding a pathway-specific transcriptional repressor. The Tn5 insertion was located in an open reading frame, tentatively named phlH, which is not related to known phl genes. In wild-type CHA0, 2,4-DAPG production paralleled expression of a phlA'-'lacZ translational fusion, reaching a maximum in the late exponential growth phase. Thereafter, the compound appeared to be degraded to monoacetylphloroglucinol by the bacterium. 2,4-DAPG was identified as the active compound in extracts from culture supernatants of strain CHA0 specifically inducing phlA'-'lacZ expression about sixfold during exponential growth. Induction by exogenous 2,4-DAPG was most conspicuous in a phlA mutant, which was unable to produce 2,4-DAPG. In a phlF mutant, 2,4-DAPG production was enhanced severalfold and phlA'-'lacZ was expressed at a level corresponding to that in the wild type with 2,4-DAPG added. The phlF mutant was insensitive to 2,4-DAPG addition. A transcriptional phlA-lacZ fusion was used to demonstrate that the repressor PhlF acts at the level of transcription. Expression of phlA'-'lacZ and 2,4-DAPG synthesis in strain CHA0 was strongly repressed by the bacterial extracellular metabolites salicylate and pyoluteorin as well as by fusaric acid, a toxin produced by the phytopathogenic fungus Fusarium. In the phlF mutant, these compounds did not affect phlA'-'lacZ expression and 2,4-DAPG production. PhlF-mediated induction by 2,4-DAPG and repression by salicylate of phlA'-'lacZ expression was confirmed by using Escherichia coli as a heterologous host. In conclusion, our results show that autoinduction of 2,4-DAPG biosynthesis can be countered by certain bacterial (and fungal) metabolites. This mechanism, which depends on phlF function, may help P. fluorescens to produce homeostatically balanced amounts of extracellular metabolites.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this paper is to discuss whether children have a capacity for deontic reasoning that is irreducible to mentalizing. The results of two experiments point to the existence of such non-mentalistic understanding and prediction of the behaviour of others. In Study 1, young children (3- and 4-year-olds) were told different versions of classic false-belief tasks, some of which were modified by the introduction of a rule or a regularity. When the task (a standard change of location task) included a rule, the performance of 3-year-olds, who fail traditional false-belief tasks, significantly improved. In Study 2, 3-year-olds proved to be able to infer a rule from a social situation and to use it in order to predict the behaviour of a character involved in a modified version of the false-belief task. These studies suggest that rules play a central role in the social cognition of young children and that deontic reasoning might not necessarily involve mind reading.

Relevance:

20.00%

Publisher:

Abstract:

The method of stochastic dynamic programming is widely used in behavioural ecology, but it has some shortcomings arising from its use of finite time horizons. The authors present an alternative approach based on the methods of renewal theory. The suggested method uses the cumulative energy reserve per unit of time as its criterion, which leads to stationary cycles in the state space. This approach allows optimal foraging to be studied by analytic methods.
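Read as a renewal-reward criterion, and assuming foraging decomposes into i.i.d. cycles with net energy gain G and duration T per cycle (an assumption not stated in the abstract), the quantity being maximized is the long-run rate

```latex
\lim_{t \to \infty} \frac{\mathbb{E}[\text{energy accumulated by time } t]}{t}
  = \frac{\mathbb{E}[G]}{\mathbb{E}[T]},
```

which is stationary over cycles and hence amenable to the analytic treatment the authors describe.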

Relevance:

20.00%

Publisher:

Abstract:

On the efficiency of recursive evaluations with applications to risk theory. This thesis consists of three essays on the efficiency of recursive evaluations of the distribution of the total claim amount of a portfolio of insurance policies over a given period. Computing its probability function, or quantities related to this distribution, arises frequently in most areas of actuarial practice, for example when computing the solvency capital in Switzerland or when modelling the loss of a life insurance over one year. The main problem with recursive evaluations is that the propagation of errors stemming from the computer's representation of real numbers can be disastrous; on the other hand, by reducing the number of arithmetic operations, they provide a substantial gain in computing time compared with other methods. In the first essay, we use certain properties of an efficient computational tool to optimize the computing time while guaranteeing a certain quality of the results with respect to the propagation of these errors during the evaluation. In the second essay, we derive exact expressions and bounds for the errors that occur in cumulative distribution functions of a given order when these are evaluated recursively from an approximation of the associated De Pril transform. These cumulative functions allow certain essential quantities, such as stop-loss premiums, to be computed directly. Finally, in the third essay, we study the stability of the recursive evaluations of these cumulative functions with respect to the propagation of the above-mentioned errors and determine the precision required in the representation of real numbers to guarantee satisfactory results. This precision depends largely on the associated De Pril transform.
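The thesis works with the De Pril transform, which the abstract does not reproduce; as a minimal sketch of the kind of recursive evaluation whose error propagation is at issue, here is the closely related Panjer recursion for a compound Poisson aggregate-claims distribution (illustrative code, not taken from the thesis):

```python
import math

def panjer_poisson(lam, severity, s_max):
    """Panjer recursion for a compound Poisson model with claim sizes
    on {1, ..., m}, where severity[j-1] = P(X = j).

    Returns f with f[s] = P(S = s) for s = 0..s_max."""
    m = len(severity)
    f = [0.0] * (s_max + 1)
    f[0] = math.exp(-lam)  # P(S = 0): no claims occur
    for s in range(1, s_max + 1):
        # each f[s] is built from earlier values, so floating-point
        # rounding errors propagate through the whole recursion --
        # the phenomenon the three essays study
        f[s] = sum(lam * j / s * severity[j - 1] * f[s - j]
                   for j in range(1, min(s, m) + 1))
    return f

# example: Poisson(2) claim counts, claim sizes uniform on {1, 2, 3}
probs = panjer_poisson(2.0, [1 / 3, 1 / 3, 1 / 3], 20)
print(sum(probs))  # close to 1 for a large enough s_max
```

Each value depends on all earlier ones, so the operation count is small compared with brute-force convolution, but an error introduced early is carried into every subsequent term: exactly the speed-versus-stability trade-off the thesis examines.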

Relevance:

20.00%

Publisher:

Abstract:

The present paper studies the probability of ruin of an insurer if excess-of-loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
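The recursive algorithm itself is not given in the abstract; as a sketch of the simulation alternative it is benchmarked against, a crude Monte Carlo estimator of the finite-time ruin probability in the classical model without reinstatements might look as follows (all names illustrative):

```python
import random

def ruin_prob_mc(u, c, lam, claim_sampler, horizon, n_paths=100_000):
    """Monte Carlo estimate of the finite-time ruin probability
    psi(u, T) in a classical Cramer-Lundberg model.

    u: initial surplus, c: premium rate, lam: Poisson claim intensity,
    claim_sampler: draws one claim size, horizon: time horizon T."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += random.expovariate(lam)  # next claim arrival
            if t > horizon:
                break                     # survived to the horizon
            claims += claim_sampler()
            if u + c * t - claims < 0:    # ruin can only occur at claim times
                ruined += 1
                break
    return ruined / n_paths

# example: exponential claims with mean 1 and a 10% security loading
print(ruin_prob_mc(u=5.0, c=1.1, lam=1.0,
                   claim_sampler=lambda: random.expovariate(1.0),
                   horizon=100.0, n_paths=20_000))
```

Simulation scales poorly when small ruin probabilities must be resolved accurately, which is one reason the contractive fixed-point representation is attractive despite its high-dimensional integrals.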

Relevance:

20.00%

Publisher:

Abstract:

TWEAK, a TNF family ligand with pleiotropic cellular functions, was originally described as capable of inducing tumor cell death in vitro. TWEAK functions by binding its receptor, Fn14, which is up-regulated on many human solid tumors. Herein, we show that intratumoral administration of TWEAK, delivered either by an adenoviral vector or in an immunoglobulin Fc-fusion form, results in significant inhibition of tumor growth in a breast xenograft model. To exploit the TWEAK-Fn14 pathway as a therapeutic target in oncology, we developed an anti-Fn14 agonistic antibody, BIIB036. Studies described herein show that BIIB036 binds specifically to Fn14 but not to other members of the TNF receptor family, induces Fn14 signaling, and promotes tumor cell apoptosis in vitro. In vivo, BIIB036 effectively inhibits growth of tumors in multiple xenograft models, including colon (WiDr), breast (MDA-MB-231), and gastric (NCI-N87) tumors, regardless of the tumor cell growth inhibition response observed to BIIB036 in vitro. The anti-tumor activity in these cell lines is not TNF-dependent. Increasing the antigen-binding valency of BIIB036 significantly enhances its anti-tumor effect, suggesting the contribution of higher-order cross-linking of the Fn14 receptor. Full Fc effector function is required for maximal activity of BIIB036 in vivo, likely due to the cross-linking effect and/or ADCC-mediated tumor-killing activity. Taken together, the anti-tumor properties of BIIB036 validate Fn14 as a promising target in oncology and demonstrate its potential therapeutic utility in multiple solid tumor indications.

Relevance:

20.00%

Publisher:

Abstract:

The proportion of the population living in or around cities is greater than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, wasted land, and noise, together with health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation, and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established (a sketch of such a clustering follows this abstract). Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography; these methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
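The abstract does not specify how the urban clusters are constructed; a minimal percolation-style sketch, assuming a gridded population-density surface with the density threshold acting as the scale parameter (all names hypothetical), could read:

```python
import numpy as np
from scipy import ndimage

def urban_clusters(density, threshold):
    """Label connected above-threshold cells of a density grid.

    Cells with density > threshold count as urban; 4-connected
    components then play the role of percolation clusters, whose
    size distribution can be tracked as the threshold varies."""
    urban = density > threshold
    labels, n = ndimage.label(urban)  # 4-connectivity by default
    sizes = ndimage.sum(urban, labels, index=range(1, n + 1))
    return labels, np.asarray(sizes)

# example on a smoothed synthetic density surface
rng = np.random.default_rng(0)
density = ndimage.gaussian_filter(rng.random((200, 200)), sigma=5)
for thr in (0.45, 0.50, 0.55):
    _, sizes = urban_clusters(density, thr)
    print(thr, len(sizes), int(sizes.max()) if len(sizes) else 0)
```

Near the percolation threshold a giant cluster emerges and the cluster-size distribution becomes heavy-tailed, which is where fractal statistics such as lacunarity become informative.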

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: The current study tested the applicability of Jessor's problem behavior theory (PBT) in national probability samples from Georgia and Switzerland. Comparisons focused on (1) the applicability of the problem behavior syndrome (PBS) in both developmental contexts and (2) the applicability of a set of theory-driven risk and protective factors in the prediction of problem behaviors. METHODS: School-based questionnaire data were collected from n = 18,239 adolescents in Georgia (n = 9499) and Switzerland (n = 8740) following the same protocol. Participants rated five measures of problem behaviors (alcohol and drug use, problems because of alcohol and drug use, and deviance), three risk factors (future uncertainty, depression, and stress), and three protective factors (family, peer, and school attachment). Final study samples included n = 9043 Georgian youth (mean age = 15.57; 58.8% female) and n = 8348 Swiss youth (mean age = 17.95; 48.5% female). Data analyses were completed using structural equation modeling, path analyses, and post hoc z-tests for comparisons of regression coefficients. RESULTS: Findings indicated that the PBS replicated in both samples and that the theory-driven risk and protective factors accounted for 13% and 10% of the variance in the PBS in the Georgian and Swiss samples, respectively, net of the effects of demographic variables. Follow-up z-tests provided evidence of some differences in the magnitude, but not the direction, of five of six individual paths by country. CONCLUSION: PBT and the PBS find empirical support in these Eurasian and Western European samples; thus, Jessor's theory holds value and promise in understanding the etiology of adolescent problem behaviors outside of the United States.
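The abstract does not spell out the post hoc test; the standard z-statistic for comparing two independently estimated regression coefficients, which is presumably what is meant, is

```latex
z = \frac{b_{1} - b_{2}}{\sqrt{SE_{b_1}^{2} + SE_{b_2}^{2}}},
```

where b_1 and b_2 are the country-specific path coefficients and SE their standard errors.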

Relevance:

20.00%

Publisher:

Abstract:

Executive Summary

The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of the measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a distribution of portfolio returns that second-order stochastically dominates those obtained with virtually all of the individual performance measures considered (a sketch of these dominance checks appears below).

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
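The summary describes these checks only verbally; a sketch of both steps under assumed inputs (two vectors of realized returns, all names illustrative) might look like:

```python
import numpy as np
from scipy import stats

def absolute_lorenz(returns, grid):
    """Empirical absolute Lorenz curve L(p) = integral_0^p F^{-1}(u) du,
    approximated from the sorted sample."""
    x = np.sort(returns)
    n = len(x)
    cum = np.concatenate(([0.0], np.cumsum(x))) / n  # L at p = k/n
    return np.interp(grid, np.arange(n + 1) / n, cum)

def ssd_dominates(a, b, n_grid=101):
    """Pointwise comparison of absolute Lorenz curves: a second-order
    stochastically dominates b iff L_a(p) >= L_b(p) for all p."""
    p = np.linspace(0.0, 1.0, n_grid)
    return bool(np.all(absolute_lorenz(a, p) >= absolute_lorenz(b, p)))

# synthetic example: higher mean, lower dispersion should dominate
rng = np.random.default_rng(1)
aggregated = rng.normal(0.09, 0.10, 5_000)  # stand-in for the aggregated strategy
single = rng.normal(0.08, 0.15, 5_000)      # stand-in for a single-measure strategy
print(stats.ks_2samp(aggregated, single))   # step 1: are the distributions different?
print(ssd_dominates(aggregated, single))    # step 2: pointwise Lorenz comparison
```

The Lorenz values at the grid quantiles are, up to sign and normalization conventions, the sequence of expected shortfalls mentioned in the summary.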