990 results for Utility functions


Relevance:

60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

60.00%

Publisher:

Abstract:

The central motif of this work is prediction and optimization in the presence of multiple interacting intelligent agents. We use the phrase `intelligent agents' to imply, in some sense, a `bounded rationality', the exact meaning of which varies depending on the setting. Our agents may not be `rational' in the classical game-theoretic sense, in that they do not always optimize a global objective. Rather, they rely on heuristics, as is natural for human agents or even software agents operating in the real world. Within this broad framework we study the problem of influence maximization in social networks, where the behavior of agents is myopic but complication stems from the structure of the interaction networks. In this setting, we generalize two well-known models and give new algorithms and hardness results for our models. Then we move on to models where the agents reason strategically but are faced with considerable uncertainty. For such games, we give a new solution concept and analyze a real-world game using our techniques. Finally, the richest model we consider is that of Network Cournot Competition, which deals with strategic resource allocation in hypergraphs, where agents reason strategically and their interaction is specified indirectly via the players' utility functions. For this model, we give the first equilibrium computability results. In all of the above problems, we assume that payoffs for the agents are known. However, for real-world games, obtaining the payoffs can be quite challenging. To this end, we also study the inverse problem of inferring payoffs, given game history. We propose and evaluate a data-analytic framework and show that it is fast and performant.
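As a point of reference for the influence-maximization setting described above, the sketch below shows the classic greedy seed-selection routine under the independent cascade model with myopic, one-shot activations. It is a minimal illustration in Python, not the thesis's generalized models or algorithms; the adjacency-list graph format, the uniform activation probability p, and the Monte Carlo budget are all illustrative assumptions.

import random

def simulate_cascade(graph, seeds, p=0.1):
    # One Monte Carlo run of the independent cascade model.
    # graph: dict mapping node -> list of neighbours; p: uniform activation probability.
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return len(active)

def greedy_influence_max(graph, k, runs=200, p=0.1):
    # Greedily add the seed with the largest estimated expected spread.
    seeds = set()
    for _ in range(k):
        best_node, best_spread = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_cascade(graph, seeds | {v}, p) for _ in range(runs)) / runs
            if spread > best_spread:
                best_node, best_spread = v, spread
        seeds.add(best_node)
    return seeds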

Relevance:

60.00%

Publisher:

Abstract:

We provide a nonparametric ‘revealed preference’ characterization of rational household behavior in terms of the collective consumption model, while accounting for general (possibly non-convex) individual preferences. We establish a Collective Axiom of Revealed Preference (CARP), which provides a necessary and sufficient condition for data consistency with collective rationality. Our main result takes the form of a ‘collective’ version of the Afriat Theorem for rational behavior in terms of the unitary model. This theorem has some interesting implications. With only a finite set of observations, the nature of consumption externalities (positive or negative) in the intra-household allocation process is non-testable. The same non-testability conclusion holds for privateness (with or without externalities) or publicness of consumption. By contrast, concavity of individual utility functions (representing convex preferences) turns out to be testable. In addition, monotonicity is testable for the model that assumes all household consumption is public.
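For orientation, the unitary benchmark that the collective result above generalizes can be stated compactly: the classical Afriat Theorem says that a finite data set of prices and consumption bundles $\{(p_t, x_t)\}_{t=1}^{T}$ is consistent with maximization of a single non-satiated utility function if and only if there exist numbers $U_t$ and $\lambda_t > 0$ satisfying the Afriat inequalities

\[
  U_s \;\le\; U_t + \lambda_t\, p_t^{\top} (x_s - x_t)
  \qquad \text{for all } s, t \in \{1, \dots, T\}.
\]

The CARP condition of the paper plays the analogous role for the collective consumption model; its exact form is not reproduced here.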

Relevance:

60.00%

Publisher:

Abstract:

We investigate the role of index bonds in a dynamic consumption and asset allocation model where the rate of real consumption at any given time cannot fall below a fixed level. An explicit form of the optimal consumption and portfolio rule for a class of Constant Relative Risk Aversion (CRRA) utility functions is derived. Consumption increases above the subsistence level only when wealth exceeds a threshold value. Risky investments in equity and nominal bonds are initially proportional to the excess of wealth over a lower bound, and then increase nonlinearly with wealth. The desirability of investing in the risky assets is related to the agent's risk preference, the equity premium, and the inflation risk premium. The demand for index bonds is also obtained. The results should be useful for the management of defined benefit pension funds, university endowments, and other portfolios that have a withdrawal pre-commitment in real terms.
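For comparison only, and not the model of the paper (which adds inflation risk, index bonds, and a binding floor on real consumption): in the standard frictionless Merton problem with a subsistence-shifted CRRA utility, constant riskless rate $r$, risky drift $\mu$, volatility $\sigma$, risk aversion $\gamma$, and consumption floor $\bar c$,

\[
  u(c) = \frac{(c - \bar c)^{1-\gamma}}{1-\gamma}, \qquad
  \theta^{*}(W) = \frac{\mu - r}{\gamma \sigma^{2}} \Bigl( W - \frac{\bar c}{r} \Bigr),
\]

so the optimal dollar holding in the risky asset is proportional to wealth in excess of the capitalized subsistence floor $\bar c / r$, matching the qualitative feature that risky investment scales with the excess of wealth over a lower bound.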

Relevance:

30.00%

Publisher:

Abstract:

Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions used to discriminate between them, selecting the optimal forecasting model is challenging. The aim of this thesis is to thoroughly investigate how effective commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
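For concreteness, the sketch below gives minimal NumPy implementations of three of the discrimination criteria discussed above, applied to one forecast covariance matrix and one volatility proxy; the exact specifications used in the thesis may differ, and the portfolio utility criterion is omitted because it additionally requires an assumed vector of expected returns.

import numpy as np

def mse_loss(proxy, forecast):
    # Matrix MSE: squared Frobenius distance between the volatility proxy and the forecast.
    d = proxy - forecast
    return float(np.sum(d * d))

def qlike_loss(proxy, forecast):
    # Multivariate QLIKE: log|H| + tr(H^{-1} Sigma) for forecast H and proxy Sigma.
    _, logdet = np.linalg.slogdet(forecast)
    return float(logdet + np.trace(np.linalg.solve(forecast, proxy)))

def portfolio_variance_loss(proxy, forecast):
    # Variance, under the proxy, of the minimum-variance portfolio implied by the forecast.
    ones = np.ones(forecast.shape[0])
    w = np.linalg.solve(forecast, ones)
    w = w / w.sum()
    return float(w @ proxy @ w)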

Relevance:

30.00%

Publisher:

Abstract:

Currently, mass spectrometry-based metabolomics studies extend beyond conventional chemical categorization and metabolic phenotype analysis to understanding gene function in various biological contexts (e.g., mammalian, plant, and microbial). These novel utilities have led to many innovative discoveries in the following areas: disease pathogenesis, therapeutic pathway or target identification, the biochemistry of animal and plant physiological and pathological activities in response to diverse stimuli, and molecular signatures of host-pathogen interactions during microbial infection. In this review, we critically evaluate representative applications of mass spectrometry-based metabolomics to better understand gene function in diverse biological contexts, with special emphasis on working principles, study protocols, and possible future developments of this technique. Collectively, this review raises awareness within the biomedical community of the scientific value and applicability of mass spectrometry-based metabolomics strategies for better understanding gene function, thus advancing their utility in a broad range of biological fields.

Relevance:

30.00%

Publisher:

Abstract:

Cancer is the second leading cause of death, with 14 million new cases and 8.2 million cancer-related deaths worldwide in 2012. Despite the progress made in cancer therapies, neoplastic diseases remain a major therapeutic challenge, notably because of the heterogeneity within and between malignant tumours and the adaptation/escape of malignant cells to/from treatment. New targeted therapies need to be developed to improve our medical arsenal and counteract cancer progression. Human kallikrein-related peptidases (KLKs) are secreted serine peptidases that are aberrantly expressed in many cancers and hold great potential for the development of targeted therapies. The potential of KLKs as cancer biomarkers is well established since the demonstration of the association between KLK3/PSA (prostate-specific antigen) levels and prostate cancer progression. In addition, a constantly increasing number of in vitro and in vivo studies demonstrate the functional involvement of KLKs in cancer-related processes. These peptidases are now considered key players in the regulation of cancer cell growth, migration, invasion, and chemo-resistance and, importantly, in mediating interactions between cancer cells and other cell populations found in the tumour microenvironment that facilitate cancer progression. These functional roles of KLKs in a cancer context further highlight their potential in the design of new anti-cancer approaches. In this review, we comprehensively cover the biochemical features of KLKs and their functional roles in carcinogenesis, followed by the latest developments and successful applications of KLK-based therapeutics in counteracting cancer progression.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies of models and simulations in the philosophy of science, the so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed are: 1) How are models constructed, and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool, and why is this process problematic? The core argument is that mediating models, as investigative instruments (cf. Morgan and Morrison 1999), take questions as their starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account develops the "epistemology of simulation" (cf. Winsberg 2003) further by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain. The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework for studying changes in the degree of interdisciplinarity, the tools and research practices developed to support collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem asks how we might characterise these objects, what is typical of them, and what kinds of changes occur in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.

Relevance:

30.00%

Publisher:

Abstract:

Multitasking among three or more different tasks is a ubiquitous requirement of everyday cognition, yet rarely is it addressed in research on healthy adults who have had no specific training in multitasking skills. Participants completed a set of diverse subtasks within a simulated shopping mall and office environment, the Edinburgh Virtual Errands Test (EVET). The aim was to investigate how different cognitive functions, such as planning, retrospective and prospective memory, and visuospatial and verbal working memory, contribute to everyday multitasking. Subtasks were chosen to be diverse, and predictions were derived from a statistical model of everyday multitasking impairments associated with frontal-lobe lesions (Burgess, Veitch, de Lacy Costello, & Shallice, 2000b). Multiple regression indicated significant independent contributions from measures of retrospective memory, visuospatial working memory, and online planning, but not from independent measures of prospective memory or verbal working memory. Structural equation modelling showed that the best fit to the data arose from three underlying constructs, with Memory and Planning having a weak link, but with both having a strong directional pathway to an Intent construct that reflected implementation of intentions. Participants who followed their preprepared plan achieved higher scores than those who altered their plan during multitask performance. This was true regardless of whether the plan was efficient or poor. These results substantially develop and extend the Burgess et al. (2000b) model to healthy adults and yield new insight into the poorly understood area of everyday multitasking. The findings also point to the utility of using virtual environments for investigating this form of complex human cognition.

Relevance:

30.00%

Publisher:

Abstract:

Economic theory distinguishes two concepts of utility: decision utility, objectively quantifiable from choices, and experienced utility, referring to the satisfaction derived from an obtained outcome. To date, experienced utility has typically been measured with subjective ratings. This study aimed to quantify experienced utility by global levels of neuronal activity. Neuronal activity was measured by means of electroencephalographic (EEG) responses, at the level of the EEG topography, to the gain and omission of graded monetary rewards in human subjects. A novel analysis approach allowed the approximation of psychophysiological value functions for the experienced utility of monetary rewards. In addition, we identified the time windows of the event-related potentials (ERP), and the respective intracortical sources, in which variations in neuronal activity were significantly related to the value or valence of outcomes. The results indicate that the value functions of experienced utility and regret increase disproportionately with monetary value, and thus contradict the compressing value functions of decision utility. The temporal pattern of outcome evaluation suggests an initial (∼250 ms) coarse evaluation of valence, concurrent with a finer-grained evaluation of the value of gained rewards, whereas the evaluation of the value of omitted rewards emerges later. We hypothesize that this temporal double dissociation is explained by reward prediction errors. Finally, a late, previously unreported, reward-sensitive ERP topography (∼500 ms) was identified. The sources of these topographical covariations were estimated in the ventromedial prefrontal cortex, the medial frontal gyrus, the anterior and posterior cingulate cortex, and the hippocampus/amygdala. The results provide important new evidence regarding "how," "when," and "where" the brain evaluates outcomes with different hedonic impact.
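A hypothetical illustration of the contrast drawn above, using simple power-law value functions (the exponents are illustrative only, not estimates from the study): a compressive decision-utility value function versus a disproportionately increasing experienced-utility function would correspond to

\[
  v_{\text{decision}}(x) = x^{\alpha}, \ 0 < \alpha < 1 \quad (\text{concave, compressing}),
  \qquad
  v_{\text{experienced}}(x) = x^{\beta}, \ \beta > 1 \quad (\text{convex, expanding}).
\]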

Relevance:

30.00%

Publisher:

Abstract:

Cancer is a result of defects in the coordination of cell proliferation and programmed cell death. The extent of cell death is physiologically controlled by the activation of a programmed suicide pathway that results in a morphologically recognizable form of death termed apoptosis. Inducing apoptosis in tumor cells by gene therapy provides a potentially effective means to treat human cancers. p84N5 is a novel nuclear death-domain-containing protein that has been shown to bind the amino-terminal domain of the retinoblastoma tumor suppressor gene product (pRb). Expression of N5 can induce apoptosis that is dependent upon its intact death domain and is inhibited by pRb. In many human cancer cells the functions of pRb are either lost through gene mutation or inactivated by other mechanisms. N5-based gene therapy may therefore induce cell death preferentially in tumor cells relative to normal cells. We have demonstrated that N5 gene therapy is less toxic to normal cells than to tumor cells. To test whether N5 could be used in gene therapy of cancer, we generated a recombinant adenovirus engineered to express N5 and tested the effects of viral infection on the growth and tumorigenicity of human cancer cells. Adenoviral N5 infection significantly reduced the proliferation and tumorigenicity of breast, ovarian, and osteosarcoma tumor cell lines. Reduced proliferation and tumorigenicity were mediated by an induction of apoptosis, as indicated by DNA fragmentation in infected cells. We also tested the potential utility of N5 for gene therapy of pancreatic carcinoma, which typically responds poorly to conventional treatment. Adenovirus-mediated N5 gene transfer inhibits the growth of pancreatic cancer cell lines in vitro. N5 gene transfer also reduces the growth and metastasis of human pancreatic adenocarcinoma in subcutaneous and orthotopic mouse models. Interestingly, the pancreatic adenocarcinoma cells are more sensitive to N5 than they are to p53, suggesting that N5 gene therapy may be effective in tumors resistant to p53. We also tested the combined use of N5 and p53 to inhibit pancreatic cancer cell growth in vitro and in vivo. Simultaneous use of N5 and RbΔCDK was found to exert a greater inhibitory effect on pancreatic cancer cell growth in vitro and in vivo.

Relevance:

30.00%

Publisher:

Abstract:

Based on clues from epidemiology, low prenatal vitamin D has been proposed as a candidate risk factor for schizophrenia. Recent animal experiments have demonstrated that transient prenatal vitamin D deficiency is associated with persistent alterations in brain morphology and neurotrophin expression. In order to explore the utility of the vitamin D animal model of schizophrenia, we examined different types of learning and memory in adult rats exposed to transient prenatal vitamin D deficiency. Compared to control animals, the prenatally depleted animals had a significant impairment of latent inhibition, a feature often associated with schizophrenia. In addition, the depleted group was (a) significantly impaired on hole-board habituation and (b) significantly better at maintaining previously learnt rules of brightness discrimination in a Y-chamber. In contrast, the prenatally depleted animals showed no impairment on the spatial learning task in the radial maze, nor on two-way active avoidance learning in the shuttle-box. The results indicate that transient prenatal vitamin D depletion in the rat is associated with subtle and discrete alterations in learning and memory. The behavioural phenotype associated with this animal model may provide insights into the neurobiological correlates of the cognitive impairments of schizophrenia.