838 results for Preference Functions
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Electrical and Computer Engineering
Abstract:
Stabilizing selection is a fundamental concept in evolutionary biology. In the presence of a single intermediate optimum phenotype (fitness peak) on the fitness surface, stabilizing selection should cause the population to evolve toward such a peak. This prediction has seldom been tested, particularly for suites of correlated traits. The lack of tests for an evolutionary match between population means and adaptive peaks may be due, at least in part, to problems associated with empirically detecting multivariate stabilizing selection and with testing whether population means are at the peak of multivariate fitness surfaces. Here we show how canonical analysis of the fitness surface, combined with the estimation of confidence regions for stationary points on quadratic response surfaces, may be used to define multivariate stabilizing selection on a suite of traits and to establish whether natural populations reside on the multivariate peak. We manufactured artificial advertisement calls of the male cricket Teleogryllus commodus and played them back to females in laboratory phonotaxis trials to estimate the linear and nonlinear sexual selection that female phonotactic choice imposes on male call structure. Significant nonlinear selection on the major axes of the fitness surface was convex in nature and displayed an intermediate optimum, indicating multivariate stabilizing selection. The mean phenotypes of four independent samples of males, from the same population as the females used in phonotaxis trials, were within the 95% confidence region for the fitness peak. These experiments indicate that stabilizing sexual selection may play an important role in the evolution of male call properties in natural populations of T. commodus.
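The canonical analysis described above can be sketched numerically: given estimated linear (beta) and quadratic (gamma) selection gradients, the eigenvalues of gamma give the curvature along the major axes of the fitness surface, and the stationary point of the quadratic surface is the candidate multivariate peak. A minimal sketch with invented gradient values (not the paper's estimates):

```python
import numpy as np

# Hypothetical selection gradients (illustrative values only):
# beta = linear gradients, gamma = quadratic gradient matrix.
beta = np.array([0.02, -0.01])
gamma = np.array([[-0.30, 0.05],
                  [0.05, -0.20]])

# Canonical analysis: eigenvectors of gamma are the major axes of the
# fitness surface; all-negative eigenvalues indicate a multivariate peak
# (stabilizing selection along every canonical axis).
eigvals, eigvecs = np.linalg.eigh(gamma)
is_peak = bool(np.all(eigvals < 0))

# Stationary point of w = a + beta'z + 0.5 * z'gamma z, i.e. the candidate
# peak whose confidence region population means would be tested against.
theta = -np.linalg.solve(gamma, beta)
```

With these made-up gradients both eigenvalues are negative, so the quadratic surface has an interior optimum; in the study the analogous computation is run on gradients estimated from the phonotaxis trials.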
Abstract:
This paper proposes an exploration of the methodology of utility functions that distinguishes interpretation from representation. While representation univocally assigns numbers to the entities of the domain of utility functions, interpretation relates these entities with empirically observable objects of choice. This allows us to make explicit the standard interpretation of utility functions, which assumes that two objects have the same utility if and only if the individual is indifferent between them. We explore the underlying assumptions of such a hypothesis and propose a non-standard interpretation according to which objects of choice have a well-defined utility although individuals may vary in the way they treat these objects in a specific context. We provide examples of such a methodological approach that may explain some reversals of preferences and suggest possible mathematical formulations for further research.
Abstract:
Preface The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is non-observable. There are several estimation methodologies that deal with estimation problems of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P 500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P 500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P 500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
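The thesis's estimator targets the joint unconditional characteristic function of a stochastic volatility jump-diffusion, but the principle behind any ECF estimator can be illustrated on a far simpler model. A hedged sketch, with made-up data, matching an empirical characteristic function to a known model CF (here a plain normal distribution) by minimizing a discretized squared distance:

```python
import numpy as np
from scipy.optimize import minimize

# Toy i.i.d. sample; the thesis works with SV jump-diffusion paths instead.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)

# Empirical characteristic function on a fixed grid of arguments u.
u = np.linspace(-2.0, 2.0, 41)
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)

def objective(params):
    # Model CF of N(mu, sigma^2); for the thesis's models this would be the
    # closed-form joint unconditional CF derived in the first chapter.
    mu, sigma = params
    model_cf = np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)
    return np.sum(np.abs(ecf - model_cf) ** 2)  # discretized L2 distance

res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

The fitted parameters should land close to the generating values (1.0 and 2.0); note that no discretization or simulation of the model is needed once its characteristic function is available in closed form, which is exactly the appeal of the approach described above.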
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which these kinds of tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, the incorporation of prior knowledge is often done by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares types of learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
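Fast cross-validation for regularized least-squares of the kind mentioned above typically rests on a well-known closed-form identity: with the hat matrix H = K(K + λI)⁻¹, the leave-one-out residual for each point follows from a single fit as (y − ŷ) / (1 − diag(H)), with no refitting. A minimal sketch on a synthetic linear-kernel problem (the thesis's actual kernels, data and cost functions are more elaborate):

```python
import numpy as np

# Synthetic regression task (made-up data and a linear kernel for brevity).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

K = X @ X.T          # kernel matrix (any positive semidefinite kernel works)
lam = 1.0
n = K.shape[0]

# Dual RLS solution: alpha = (K + lam*I)^{-1} y, fitted values y_hat = K alpha.
G_inv = np.linalg.inv(K + lam * np.eye(n))
alpha = G_inv @ y
y_hat = K @ alpha

# Fast leave-one-out CV: with hat matrix H = K (K + lam*I)^{-1}, the LOO
# residual at point i is (y_i - y_hat_i) / (1 - H_ii) -- one fit suffices.
H = K @ G_inv
loo_residuals = (y - y_hat) / (1.0 - np.diag(H))
loo_mse = float(np.mean(loo_residuals ** 2))
```

The naive alternative retrains the model n times; the identity reduces leave-one-out evaluation to a few matrix products on top of a single training run.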
Abstract:
Landscape narrative, combining landscape and narrative, has been employed to create storytelling layouts and interpretive information in some famous botanic gardens. In order to assess the educational effectiveness of using "landscape narrative" in landscape design, the Heng-Chun Tropical Botanical Garden in Taiwan was chosen as the research target for an empirical study. Based on cognitive theory and the affective responses of environmental psychology, computer simulations and video recordings were used to create five themed display areas with landscape narrative elements. Two groups of pupils watched the simulated films. The pupils were then given an evaluation test and a questionnaire to determine the effectiveness of the landscape narrative. When the content was well associated and matched with the narrative landscape, comprehension and retention of the content increased significantly. The results also indicated that the visual preference for narrative landscape scenes increased. This empirical study can be regarded as a successful model of integrating landscape narrative and interpretation practice that can be applied to the design of new theme displays in botanic gardens, improving both the effectiveness of interpretation plans and the visual preference of visitors. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
Why don't agents cooperate when they both stand to gain? This question ranks among the most fundamental in the social sciences. Explanations abound. Among the most compelling are various configurations of the prisoner's dilemma (PD), or public goods problem. Payoffs in PDs are specified in one of two ways: as primitive cardinal payoffs or as ordinal final utility. However, as final utility is objectively unobservable, only the primitive payoff games are ever observed. This paper explores mappings from primitive payoff to utility payoff games and demonstrates that even though an observable game is a PD, there are broad classes of utility functions for which there exists no associated utility PD. In particular, we show that even small amounts of either altruism or jealousy may disrupt the mapping from primitive payoff to utility PD. We then examine some implications of these results, including the possibility of conflict-inducing growth.
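The paper's central claim, that modest altruism can break the mapping from a primitive-payoff PD to a utility PD, is easy to check on a toy example. A sketch with standard illustrative payoffs (T=5, R=3, P=1, S=0, values assumed here, not taken from the paper) and an altruistic utility uᵢ = πᵢ + α·πⱼ:

```python
# Primitive PD payoffs for (row, column) action profiles; the PD ordering
# for the row player is T > R > P > S (here 5 > 3 > 1 > 0).
payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def row_utilities(alpha):
    """Altruistic utility for the row player: own payoff + alpha * opponent's."""
    return {profile: own + alpha * other
            for profile, (own, other) in payoffs.items()}

def is_pd(u):
    """Does the utility game preserve the PD ordering T > R > P > S?"""
    T, R, P, S = u[("D", "C")], u[("C", "C")], u[("D", "D")], u[("C", "D")]
    return T > R > P > S

selfish_is_pd = is_pd(row_utilities(0.0))    # selfish agents: still a PD
altruist_is_pd = is_pd(row_utilities(0.5))   # moderate altruism: ordering breaks
```

With α = 0.5 the row player values (C,D) at 2.5 but (D,D) at only 1.5, so the PD ordering fails: the observable primitive-payoff game remains a PD while no utility PD exists, which is exactly the kind of disruption the paper describes.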
Abstract:
The vertebrate thyroid system is important for multiple developmental processes, including eye development. Thus, its environmentally induced disruption may impact important fitness-related parameters like visual capacities and behaviour. The present study investigated the relation between molecular effects of thyroid disruption and morphological and physiological changes of eye development in zebrafish (Danio rerio). Two test compounds representing different molecular modes of thyroid disruption were used: propylthiouracil (PTU), which is an enzyme inhibitor of thyroid hormone synthesis, and tetrabromobisphenol A (TBBPA), which interacts with the thyroid hormone receptors. Both chemicals significantly altered transcript levels of thyroid system-related genes (TRα, TRβ, TPO, TSH, DIO1, DIO2 and DIO3) in a compound-specific way. Despite these different molecular response patterns, both treatments resulted in similar pathological alterations of the eyes, such as reduced size, RPE cell diameter and pigmentation, which were concentration-dependent. The morphological changes translated into impaired visual performance of the larvae: the optokinetic response was significantly and concentration-dependently decreased in both treatments, together with a significant increase of light preference in PTU-treated larvae. In addition, swimming activity was affected. This study provides the first evidence that different molecular modes of action of thyroid disruptors can be associated with uniform apical responses. Furthermore, this study is the first to show that pathological eye development, as it can be induced by exposure to thyroid disruptors, indeed translates into impaired visual capacities of zebrafish early life stages.
Abstract:
We propose a new method for ranking alternatives in multicriteria decision-making problems when there is imprecision concerning the alternative performances, component utility functions and weights. We assume the decision maker's preferences are represented by an additive multiattribute utility function, in which weights can be modeled by independent normal variables, fuzzy numbers, value intervals or an ordinal relation. The approaches are based on dominance measures or on exploring the weight space in order to describe which ratings would make each alternative the preferred one. On the one hand, the approaches based on dominance measures compute the minimum utility difference among pairs of alternatives and then derive a measure by which to rank the alternatives. On the other hand, the approaches based on exploring the weight space compute confidence factors describing the reliability of the analysis. These methods are compared using Monte Carlo simulation.
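The weight-space exploration described above can be sketched with a small Monte Carlo experiment: draw admissible weight vectors, score each alternative with the additive utility model, and record how often each alternative comes out on top, a confidence-factor-style measure. The component utilities and the weight distribution below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical component utilities of 3 alternatives on 2 criteria
# (rows = alternatives, columns = criteria; values invented).
U = np.array([[0.9, 0.1],
              [0.6, 0.6],
              [0.1, 0.9]])

# Imprecise weights: draw normalized non-negative weight vectors uniformly
# over the simplex (Dirichlet(1, 1) gives uniform weights summing to 1).
n_samples = 10_000
w = rng.dirichlet([1.0, 1.0], size=n_samples)

# Additive multiattribute utility of each alternative under each weight draw.
total = w @ U.T
best = np.argmax(total, axis=1)

# Share of weight draws that make each alternative the preferred one.
conf = np.bincount(best, minlength=3) / n_samples
```

With these numbers the middle, balanced alternative wins only for intermediate weights (roughly a quarter of the draws), while the two extreme alternatives split the rest; restricting the weight distribution (e.g. via ordinal constraints) would shift these shares.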
Abstract:
This note shows that, under appropriate conditions, preferences may be locally approximated by the linear utility or risk-neutral preference functional associated with a local probability transformation.
Abstract:
In this paper, we consider Preference Inference based on a generalised form of Pareto order. Preference Inference aims at reasoning over an incomplete specification of user preferences. We focus on two problems. The Preference Deduction Problem (PDP) asks if another preference statement can be deduced (with certainty) from a set of given preference statements. The Preference Consistency Problem (PCP) asks if a set of given preference statements is consistent, i.e., the statements do not contradict each other. Here, preference statements are direct comparisons between alternatives (strict and non-strict). It is assumed that a set of evaluation functions is known by which all alternatives can be rated. We consider Pareto models, which induce order relations on the set of alternatives in a Pareto manner, i.e., one alternative is preferred to another only if it is preferred on every component of the model. We describe characterisations for deduction and consistency based on an analysis of the set of evaluation functions, and present algorithmic solutions and complexity results for PDP and PCP, based on Pareto models in general and for a special case. Furthermore, a comparison shows that inference based on Pareto models is less cautious than some other well-known types of preference models.
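The Pareto ordering used above, where one alternative is preferred to another only if every evaluation function rates it at least as well, can be sketched directly. The evaluation functions and alternatives here are invented for illustration:

```python
# Hypothetical evaluation functions, each rating an alternative on one component.
evaluations = [lambda x: x["price_score"], lambda x: x["quality"]]

def pareto_preferred(a, b, evals, strict=False):
    """a is (weakly) Pareto-preferred to b iff every evaluation rates a at
    least as high as b; strict preference additionally needs one strict gap."""
    weak = all(f(a) >= f(b) for f in evals)
    if not strict:
        return weak
    return weak and any(f(a) > f(b) for f in evals)

a = {"price_score": 0.90, "quality": 0.7}
b = {"price_score": 0.80, "quality": 0.7}
c = {"price_score": 0.95, "quality": 0.5}

a_beats_b = pareto_preferred(a, b, evaluations, strict=True)  # a dominates b
a_vs_c = pareto_preferred(a, c, evaluations)  # a and c are incomparable
```

The incomparability of a and c illustrates why this inference is cautious: a statement like "a is preferred to c" can neither be deduced nor refuted from these evaluations alone, which is the kind of question PDP formalises.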
Abstract:
Purpose. To determine the mechanisms predisposing to penile fracture, as well as the long-term rates of penile deformity and of erectile and voiding dysfunction. Methods. All fractures were repaired on an emergency basis via a subcoronal incision and absorbable sutures, with simultaneous repair of any urethral lesion. Patients' status before fracture, and voiding and erectile function in the long term, were assessed by periodic follow-up and phone call. The detailed history included cause, symptoms, and single-question self-reports of erectile and voiding function. Results. Among the 44 suspected cases, 42 (95.4%) were confirmed; mean age was 34.5 years (range: 18-60) and mean follow-up 59.3 months (range: 9-155). Half presented the classical triad of audible crack, detumescence, and pain. Heterosexual intercourse was the most common cause (28 patients, 66.7%), followed by penile manipulation (6 patients, 14.3%) and homosexual intercourse (4 patients, 9.5%). Woman on top was the most common heterosexual position (n = 14, 50%), followed by doggy style (n = 8, 28.6%). In four patients (9.5%) the cause remained unclear. Six patients (14.3%) had urethral injury and two (4.8%) had erectile dysfunction, treated with a penile prosthesis and PDE-5 inhibitors. No patient showed urethral fistula, voiding deterioration, penile nodule/curvature or pain. Conclusions. Woman on top was potentially the riskiest sexual position (50%). Immediate surgical treatment ensures very low long-term morbidity.
Abstract:
Streptococcus sanguinis is a commensal pioneer colonizer of teeth and an opportunistic pathogen of infectious endocarditis. The establishment of S. sanguinis in host sites likely requires dynamic fitting of the cell wall in response to local stimuli. In this study, we investigated the two-component system (TCS) VicRK in S. sanguinis (VicRKSs), which regulates genes of cell wall biogenesis, biofilm formation, and virulence in opportunistic pathogens. A vicK knockout mutant obtained from strain SK36 (SKvic) showed slight reductions in aerobic growth and resistance to oxidative stress but an impaired ability to form biofilms, a phenotype restored in the complemented mutant. The biofilm-defective phenotype was associated with reduced amounts of extracellular DNA during aerobic growth, with reduced production of H2O2, a metabolic product associated with DNA release, and with reduced inhibitory capacity against competitor species of S. sanguinis. No changes in autolysis or cell surface hydrophobicity were detected in SKvic. Reverse transcription-quantitative PCR (RT-qPCR), electrophoretic mobility shift assays (EMSA), and promoter sequence analyses revealed that VicR directly regulates genes encoding murein hydrolases (SSA_0094, cwdP, and gbpB) and spxB, which encodes pyruvate oxidase for H2O2 production. Genes previously associated with spxB expression (spxR, ccpA, ackA, and tpK) were not transcriptionally affected in SKvic. RT-qPCR analyses of S. sanguinis biofilm cells further showed upregulation of VicRK targets (spxB, gbpB, and SSA_0094) and other genes for biofilm formation (gtfP and comE) compared to expression in planktonic cells. This study provides evidence that VicRKSs regulates functions crucial for S. sanguinis establishment in biofilms and identifies novel VicRK targets potentially involved in hydrolytic activities of the cell wall required for these functions.
Abstract:
Dulce de leche samples available in the Brazilian market were submitted to sensory profiling by quantitative descriptive analysis and an acceptance test, as well as sensory evaluation using the just-about-right scale and purchase intent. External preference mapping and the ideal sensory characteristics of dulce de leche were determined. The results were also evaluated by principal component analysis, hierarchical cluster analysis, partial least squares regression, artificial neural networks, and logistic regression. Overall, significant product acceptance was related to intermediate scores of the sensory attributes in the descriptive test, and this trend was observed even after consumer segmentation. The results obtained by sensometric techniques showed that optimizing an ideal dulce de leche from the sensory standpoint is a multidimensional process, with necessary adjustments to the appearance, aroma, taste, and texture attributes of the product for better consumer acceptance and purchase. The optimum dulce de leche was characterized by high scores for the attributes sweet taste, caramel taste, brightness, color, and caramel aroma, in accordance with the preference mapping findings. In industrial terms, this means changing the parameters used in the thermal treatment and making quantitative changes to the ingredients used in formulations.
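External preference mapping of the kind used above is commonly built by projecting samples onto principal components of the descriptive panel data and regressing consumer acceptance onto that sensory map. A minimal sketch with random stand-in data (the study's actual panel scores are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: 8 dulce de leche samples x 5 descriptive attributes,
# plus mean consumer acceptance per sample (all values invented).
descriptive = rng.uniform(0.0, 10.0, size=(8, 5))
liking = rng.uniform(4.0, 9.0, size=8)

# PCA of the centered descriptive profiles via SVD.
Xc = descriptive - descriptive.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T           # sample coordinates on the first two PCs

# External preference mapping: regress acceptance onto the sensory map.
A = np.column_stack([np.ones(len(liking)), scores])
coef, *_ = np.linalg.lstsq(A, liking, rcond=None)
```

The fitted coefficients indicate in which direction of the sensory map acceptance increases; richer variants fit quadratic (ideal-point) surfaces per consumer, closer to the segmentation-based analysis reported above.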