16 results for conditional expected utility


Relevance: 80.00%

Abstract:

Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the riskier option is chosen. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting when a patient may select experimental therapies, conventional therapies, or no treatment at all.
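To make the reference-point mechanism concrete, here is a minimal sketch using the standard Tversky-Kahneman value function; the parameter values and the health-state numbers are illustrative assumptions, not taken from the paper.

```python
# Minimal prospect-theory sketch. The value function is the standard
# Tversky-Kahneman form; alpha, beta, lam and the health-state numbers
# below are illustrative assumptions, not values from the paper.

def value(outcome, reference, alpha=0.88, beta=0.88, lam=2.25):
    x = outcome - reference          # gain (x >= 0) or loss (x < 0)
    if x >= 0:
        return x ** alpha            # concave over gains: risk-averse
    return -lam * (-x) ** beta       # convex over losses: risk-seeking

def prospect_value(lottery, reference):
    """Value of a (probability, outcome) lottery under the value function."""
    return sum(p * value(x, reference) for p, x in lottery)

risky = [(0.5, 80), (0.5, 20)]       # experimental therapy: high variance
safe  = [(1.0, 45)]                  # conventional therapy: certain outcome

for ref in (30, 60):                 # adjusted vs. unadjusted reference point
    print(ref, prospect_value(risky, ref) > prospect_value(safe, ref))
```

With the lower (adjusted) reference point the certain option wins; with the higher (unadjusted) reference point both outcomes register as losses and the risky therapy is preferred, which is the paper's qualitative prediction.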

Relevance: 80.00%

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models adjust their complexity adaptively from data as necessary and are resistant to overfitting and underfitting. However, most existing work assumes that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.

Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, with respect to future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to a new information value function for estimating target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied, and the optimization problems are proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
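The closed form for the KL divergence between two Gaussians underlies this construction. Below is a minimal numerical sketch, assuming a standard GP regression posterior with a squared-exponential kernel; the kernel choice and helper names are illustrative, not the dissertation's.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel on 1-D inputs (an illustrative choice)."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_meas, y_meas, x_test, noise=0.1):
    """Standard GP regression posterior mean and covariance at x_test."""
    K = rbf(x_meas, x_meas) + noise**2 * np.eye(x_meas.size)
    Ks = rbf(x_test, x_meas)
    sol = np.linalg.solve(K, Ks.T)          # K^{-1} Ks^T
    mu = sol.T @ y_meas
    cov = rbf(x_test, x_test) - Ks @ sol
    return mu, cov

def gaussian_kl(mu0, cov0, mu1, cov1, jitter=1e-8):
    """KL( N(mu0,cov0) || N(mu1,cov1) ) at a finite set of test points."""
    k = mu0.size
    c0 = cov0 + jitter * np.eye(k)          # jitter guards near-singular covs
    c1 = cov1 + jitter * np.eye(k)
    c1i = np.linalg.inv(c1)
    diff = mu1 - mu0
    _, ld0 = np.linalg.slogdet(c0)
    _, ld1 = np.linalg.slogdet(c1)
    return 0.5 * (np.trace(c1i @ c0) + diff @ c1i @ diff - k + ld1 - ld0)
```

The expected KL in the dissertation averages this quantity over future measurements; for a GP the posterior covariance depends only on where the sensor measures, not on the measured values, so only the mean-difference term needs averaging.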

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
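For the discrete-control case, the greedy pattern is simple enough to sketch. This is a generic skeleton assuming an `info_gain` callback like the expected KL above; it is not the dissertation's implementation.

```python
def greedy_plan(controls, horizon, info_gain):
    """Pick one sensor control per step, each maximizing marginal gain.

    controls: the discrete set of admissible control inputs (hypothetical).
    info_gain(plan, u): expected information from appending control u to
    the current plan, e.g. an expected-KL estimate as sketched above.
    """
    plan = []
    for _ in range(horizon):
        plan.append(max(controls, key=lambda u: info_gain(plan, u)))
    return plan
```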

Relevance: 30.00%

Abstract:

This paper considers forecasting the conditional mean and variance from a single-equation dynamic model with autocorrelated disturbances following an ARMA process, and with innovations exhibiting time-dependent conditional heteroskedasticity as represented by a linear GARCH process. Expressions for the minimum MSE predictor and the conditional MSE are presented. We also derive formulas for all the theoretical moments of the prediction error distribution from a general dynamic model with GARCH(1,1) innovations. These results are then used to construct ex ante prediction confidence intervals by means of the Cornish-Fisher asymptotic expansion. An empirical example relating to the uncertainty of the expected depreciation of foreign exchange rates illustrates the usefulness of the results. © 1992.
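For reference, the textbook GARCH(1,1) multi-step variance forecast that sits behind such prediction intervals (the standard recursion, not the paper's exact expressions) is

\[
\sigma_{t}^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2},
\qquad
\mathrm{E}_{t}\!\left[\sigma_{t+h}^{2}\right]
  = \bar{\sigma}^{2} + (\alpha + \beta)^{h-1}\left(\sigma_{t+1}^{2} - \bar{\sigma}^{2}\right),
\qquad
\bar{\sigma}^{2} = \frac{\omega}{1 - \alpha - \beta},
\]

so the conditional variance mean-reverts geometrically to its unconditional level at rate \(\alpha + \beta\).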

Relevance: 20.00%

Abstract:

We exploit the distributional information contained in high-frequency intraday data in constructing a simple conditional moment estimator for stochastic volatility diffusions. The estimator is based on the analytical solutions of the first two conditional moments for the latent integrated volatility, the realization of which is effectively approximated by the sum of the squared high-frequency increments of the process. Our simulation evidence indicates that the resulting GMM estimator is highly reliable and accurate. Our empirical implementation based on high-frequency five-minute foreign exchange returns suggests the presence of multiple latent stochastic volatility factors and possible jumps. © 2002 Elsevier Science B.V. All rights reserved.
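The core approximation here — latent integrated variance proxied by the sum of squared intraday log-returns — is a one-liner; a minimal sketch follows (the function name and data shape are illustrative):

```python
import numpy as np

def realized_variance(prices):
    """Sum of squared high-frequency log-price increments over one interval,
    e.g. one day of five-minute prices -> one realized-variance observation."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    return float(np.sum(r ** 2))
```

The paper's GMM step then matches the analytical first two conditional moments of the latent integrated volatility, with this realized quantity standing in for the unobservable realization.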

Relevance: 20.00%

Abstract:

While advances in regenerative medicine and vascular tissue engineering have been substantial in recent years, important stumbling blocks remain. In particular, the limited life span of differentiated cells harvested from elderly human donors is a major constraint in many areas of regenerative medicine. Recently, a mutant of the human telomerase reverse transcriptase enzyme (TERT) was described that is highly processive and elongates telomeres more rapidly than conventional telomerase. This mutant, called pot1-TERT, is a chimeric fusion between the DNA-binding protein pot1 and TERT. Because pot1-TERT is highly processive, transient delivery of this transgene to cells used in regenerative medicine applications may elongate telomeres and extend cellular life span while avoiding the risks associated with retroviral or lentiviral vectors. In the present study, adenoviral delivery of pot1-TERT resulted in transient reconstitution of telomerase activity in human smooth muscle cells, as demonstrated by the telomeric repeat amplification protocol (TRAP). In addition, human engineered vessels cultured using pot1-TERT-expressing cells had greater collagen content and somewhat better in vivo performance than control grafts. Hence, transient delivery of pot1-TERT to elderly human cells may be useful for increasing cellular life span and improving the functional characteristics of the resulting tissue-engineered constructs.

Relevance: 20.00%

Abstract:

An enduring challenge for the policy and political sciences is the valid and reliable depiction of policy designs. One emerging approach for dissecting policy designs is the application of Sue Crawford and Elinor Ostrom's institutional grammar tool. The grammar tool offers a method for systematically identifying the core elements that comprise policies, including target audiences, expected patterns of behavior, and formal modes of sanctioning for noncompliance. This article makes three contributions to the study of policy designs by developing and applying the institutional grammar tool. First, we provide revised guidelines for applying the institutional grammar tool to the study of policy design. Second, an additional component of the grammar, called the oBject, is introduced. Third, we apply the modified grammar tool to four policies that shape Colorado State Aquaculture to demonstrate its effectiveness and utility in illuminating institutional linkages across levels of analysis. The conclusion summarizes the contributions of the article and points to future research and applications of the institutional grammar tool. © 2011 Policy Studies Organization.
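As a data structure, a statement coded with the extended grammar might look like the sketch below. The component names follow Crawford and Ostrom's ADICO syntax plus the oBject introduced in this article; the example statement itself is hypothetical, not quoted from the Colorado policies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstitutionalStatement:
    """A policy statement decomposed with the (extended) institutional grammar."""
    attribute: str            # to whom the statement applies
    deontic: Optional[str]    # may / must / must not
    aim: str                  # the prescribed action or outcome
    obj: Optional[str]        # the oBject: receiver of the action (added here)
    condition: str            # when and where the statement applies
    or_else: Optional[str]    # sanction for noncompliance

# Hypothetical example statement:
stmt = InstitutionalStatement(
    attribute="licensed aquaculture facilities",
    deontic="must",
    aim="submit a health inspection report",
    obj="the state aquaculture board",
    condition="by the end of each calendar year",
    or_else="suspension of the facility license",
)
```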

Relevance: 20.00%

Abstract:

What is the relationship between the design of regulations and levels of individual compliance? To answer this question, Crawford and Ostrom's institutional grammar tool is used to deconstruct regulations governing the aquaculture industry in Colorado, USA. Compliance with the deconstructed regulatory components is then assessed based on the perceptions of the appropriateness of the regulations, involvement in designing the regulations, and intrinsic and extrinsic motivations. The findings suggest that levels of compliance with regulations vary across and within individuals regarding various aspects of the regulatory components. As expected, the level of compliance is affected by the perceived appropriateness of regulations, participation in designing the regulations, and feelings of guilt and fear of social disapproval. Furthermore, there is a strong degree of interdependence among the written components, as identified by the institutional grammar tool, in affecting compliance levels. The paper contributes to the regulation and compliance literature by illustrating the utility of the institutional grammar tool in understanding regulatory content, applying a new Q-Sort technique for measuring individual levels of compliance, and providing a rare exploration into feelings of guilt and fear outside of the laboratory setting. © 2012 Blackwell Publishing Asia Pty Ltd.

Relevance: 20.00%

Abstract:

We analyze the cost-effectiveness of electric utility ratepayer-funded programs to promote demand-side management (DSM) and energy efficiency (EE) investments. We specify a model that relates electricity demand to previous EE DSM spending, energy prices, income, weather, and other demand factors. In contrast to previous studies, we allow EE DSM spending to have a potential long-term demand effect and explicitly address possible endogeneity in spending. We find that current-period EE DSM expenditures reduce electricity demand and that this effect persists for a number of years. Our findings suggest that ratepayer-funded DSM expenditures between 1992 and 2006 produced a central estimate of 0.9 percent savings in electricity consumption over that time period and a 1.8 percent savings over all years. These energy savings came at an expected average cost to utilities of roughly 5 cents per kWh saved when future savings are discounted at a 5 percent rate. Copyright © 2012 by the IAEE. All rights reserved.
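The cost-per-kWh figure is a discounted-savings calculation; here is a minimal sketch of the arithmetic, with made-up spending and savings numbers chosen only so the result lands near the paper's roughly 5 cents, not the authors' data.

```python
def cost_per_kwh_saved(spending, annual_savings_kwh, rate=0.05):
    """Dollars spent per discounted kWh saved (illustrative arithmetic)."""
    discounted = sum(s / (1 + rate) ** t
                     for t, s in enumerate(annual_savings_kwh, start=1))
    return spending / discounted

# Hypothetical: $1M of program spending saving 2.5 GWh/year for ten years.
print(cost_per_kwh_saved(1_000_000, [2_500_000] * 10))  # ~0.052 $/kWh
```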

Relevance: 20.00%

Abstract:

BACKGROUND: Genetic association studies are conducted to discover genetic loci that contribute to an inherited trait, identify the variants behind these associations and ascertain their functional role in determining the phenotype. To date, functional annotations of the genetic variants have rarely played more than an indirect role in assessing evidence for association. Here, we demonstrate how these data can be systematically integrated into an association study's analysis plan. RESULTS: We developed a Bayesian statistical model for the prior probability of phenotype-genotype association that incorporates data from past association studies and publicly available functional annotation data regarding the susceptibility variants under study. The model takes the form of a binary regression of association status on a set of annotation variables whose coefficients were estimated through an analysis of associated SNPs in the GWAS Catalog (GC). The functional predictors examined included measures that have been demonstrated to correlate with the association status of SNPs in the GC and some whose utility in this regard is speculative: summaries of the UCSC Human Genome Browser ENCODE super-track data, dbSNP function class, sequence conservation summaries, proximity to genomic variants in the Database of Genomic Variants and known regulatory elements in the Open Regulatory Annotation database, PolyPhen-2 probabilities and RegulomeDB categories. Because we expected that only a fraction of the annotations would contribute to predicting association, we employed a penalized likelihood method to reduce the impact of non-informative predictors and evaluated the model's ability to predict GC SNPs not used to construct the model. We show that the functional data alone are predictive of a SNP's presence in the GC. Further, using data from a genome-wide study of ovarian cancer, we demonstrate that their use as prior data when testing for association is practical at the genome-wide scale and improves power to detect associations. CONCLUSIONS: We show how diverse functional annotations can be efficiently combined to create 'functional signatures' that predict the a priori odds of a variant's association to a trait and how these signatures can be integrated into a standard genome-wide-scale association analysis, resulting in improved power to detect truly associated variants.
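A simplified, frequentist analogue of the modeling step — an L1-penalized logistic regression of GWAS Catalog membership on annotation features — can be sketched as below. The paper's model is a Bayesian binary regression, so this is a stand-in for the idea rather than the authors' method, and the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins: one row per SNP, columns = functional annotations
# (ENCODE summaries, conservation scores, RegulomeDB categories, ...);
# y = 1 if the SNP appears in the GWAS Catalog. Only the first feature
# is informative by construction.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (rng.random(5000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# The L1 penalty shrinks non-informative annotation coefficients to zero,
# mirroring the paper's use of penalized likelihood.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)

prior_prob = clf.predict_proba(X)[:, 1]  # per-SNP prior probability of association
```

These fitted probabilities play the role of the "functional signature": a per-variant prior that can then be carried into the genome-wide association test.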

Relevance: 20.00%

Abstract:

PREMISE OF THE STUDY: Understanding fern (monilophyte) phylogeny and its evolutionary timescale is critical for broad investigations of the evolution of land plants, and for providing the point of comparison necessary for studying the evolution of the fern sister group, seed plants. Molecular phylogenetic investigations have revolutionized our understanding of fern phylogeny; however, to date, these studies have relied almost exclusively on plastid data. METHODS: Here we take a curated phylogenomics approach to infer the first broad fern phylogeny from multiple nuclear loci, combining broad taxon sampling (73 ferns and 12 outgroup species) with focused character sampling (25 loci comprising 35,877 bp), along with rigorous alignment, orthology inference, and model selection. KEY RESULTS: Our phylogeny corroborates some earlier inferences and provides novel insights; in particular, we find strong support for Equisetales as sister to the rest of the ferns, Marattiales as sister to the leptosporangiate ferns, and Dennstaedtiaceae as sister to the eupolypods. Our divergence-time analyses reveal that divergences among the extant fern orders all occurred prior to ∼200 MYA. Finally, our species-tree inferences are congruent with analyses of concatenated data, but generally with lower support. Those cases where species-tree support values are higher than expected involve relationships that have been supported by smaller plastid datasets, suggesting that deep coalescence may be reducing support from the concatenated nuclear data. CONCLUSIONS: Our study demonstrates the utility of a curated phylogenomics approach to inferring fern phylogeny, and highlights the need to consider underlying data characteristics, along with data quantity, in phylogenetic studies.

Relevance: 20.00%

Abstract:

Estimation of the skeleton of a directed acyclic graph (DAG) is of great importance for understanding the underlying DAG, and causal effects can be assessed from the skeleton when the DAG is not identifiable. We propose a novel method named PenPC to estimate the skeleton of a high-dimensional DAG by a two-step approach. We first estimate the nonzero entries of a concentration matrix using penalized regression, and then fix the difference between the concentration matrix and the skeleton by evaluating a set of conditional independence hypotheses. For high-dimensional problems where the number of vertices p grows polynomially or exponentially with the sample size n, we study the asymptotic properties of PenPC on two types of graphs: traditional random graphs, where all vertices have the same expected number of neighbors, and scale-free graphs, where a few vertices may have a large number of neighbors. As illustrated by extensive simulations and applications to gene expression data of cancer patients, PenPC has higher sensitivity and specificity than the state-of-the-art method, the PC-stable algorithm.
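A simplified sketch of the two-step idea follows — step 1 via the graphical lasso rather than the authors' node-wise penalized regressions, and step 2 via full-order partial-correlation tests rather than PC-style conditioning sets — so this captures the spirit of PenPC, not its implementation.

```python
import numpy as np
from itertools import combinations
from scipy import stats
from sklearn.covariance import GraphicalLassoCV

def two_step_skeleton(X, alpha=0.01):
    """Step 1: sparse concentration matrix -> candidate edge set.
    Step 2: prune edges with insignificant full-order partial correlation.
    Assumes n > p + 1 (needed for the Fisher-z standard error below)."""
    n, p = X.shape
    prec = GraphicalLassoCV().fit(X).precision_
    candidates = [(i, j) for i, j in combinations(range(p), 2)
                  if abs(prec[i, j]) > 1e-6]
    ci = np.linalg.pinv(np.corrcoef(X, rowvar=False))  # inverse correlation
    keep = []
    for i, j in candidates:
        pc = np.clip(-ci[i, j] / np.sqrt(ci[i, i] * ci[j, j]), -0.9999, 0.9999)
        z = 0.5 * np.log((1 + pc) / (1 - pc)) * np.sqrt(n - p - 1)  # Fisher z
        if 2 * stats.norm.sf(abs(z)) < alpha:
            keep.append((i, j))
    return keep
```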

Relevance: 20.00%

Abstract:

BACKGROUND: Risk assessment with a thorough family health history is recommended by numerous organizations and is now a required component of the annual physical for Medicare beneficiaries under the Affordable Care Act. However, there are several barriers to incorporating robust risk assessments into routine care. MeTree, a web-based patient-facing health risk assessment tool, was developed with the aim of overcoming these barriers. In order to better understand what factors will be instrumental for broader adoption of risk assessment programs like MeTree in clinical settings, we obtained funding to perform a type III hybrid implementation-effectiveness study in primary care clinics at five diverse healthcare systems. Here, we describe the study's protocol. METHODS/DESIGN: MeTree collects personal medical information and a three-generation family health history from patients on 98 conditions. Using algorithms built entirely from current clinical guidelines, it provides clinical decision support to providers and patients on 30 conditions. All adult patients with an upcoming well-visit appointment at one of the 20 intervention clinics are eligible to participate. Patient-oriented risk reports are provided in real time. Provider-oriented risk reports are uploaded to the electronic medical record for review at the time of the appointment. Implementation outcomes are the enrollment rate of clinics, providers, and patients (enrolled vs approached) and their representativeness compared to the underlying population. Primary effectiveness outcomes are the percent of participants newly identified as being at increased risk for one of the clinical decision support conditions and the percent with appropriate risk-based screening. Secondary outcomes include percent change in those meeting goals for a healthy lifestyle (diet, exercise, and smoking). Outcomes are measured through electronic medical record data abstraction, patient surveys, and surveys/qualitative interviews of clinical staff. DISCUSSION: This study evaluates factors that are critical to successful implementation of a web-based risk assessment tool into routine clinical care in a variety of healthcare settings. The results will identify resource needs and potential barriers and solutions to implementation in each setting, as well as an understanding of potential effectiveness. TRIAL REGISTRATION: NCT01956773.

Relevance: 20.00%

Abstract:

PURPOSE: Risk-stratified guidelines can improve quality of care and cost-effectiveness, but their uptake in primary care has been limited. MeTree, a Web-based, patient-facing risk-assessment and clinical decision support tool, is designed to facilitate uptake of risk-stratified guidelines. METHODS: A hybrid implementation-effectiveness trial of three clinics (two intervention, one control). PARTICIPANTS: consentable nonadopted adults with upcoming appointments. PRIMARY OUTCOME: agreement between patient risk level and risk management, before and after MeTree, for those meeting evidence-based criteria for increased-risk risk-management strategies (increased risk) and those who do not (average risk). MEASURES: chart abstraction was used to identify risk management related to colon, breast, and ovarian cancer, hereditary cancer, and thrombosis. RESULTS: Participants = 488, female = 284 (58.2%), white = 411 (85.7%), mean age = 58.7 (SD = 12.3). Agreement between risk management and risk level for all conditions for each participant (except colon cancer, which was limited to those <50 years of age) was (i) 1.1% (N = 2/174) for the increased-risk group before MeTree and 16.1% (N = 28/174) after, and (ii) 99.2% (N = 2,125/2,142) for the average-risk group before MeTree and 99.5% (N = 2,131/2,142) after. Of those receiving increased-risk risk-management strategies at baseline, 10.5% (N = 2/19) met criteria for increased risk. After MeTree, 80.7% (N = 46/57) met criteria. CONCLUSION: MeTree integration into primary care can improve uptake of risk-stratified guidelines and potentially reduce "overuse" and "underuse" of increased-risk services. Genet Med 18(10):1020-1028.