918 results for Random Regret Minimization
Abstract:
In line with the claim that regret plays a role in decision making, O’Connor, McCormack, and Feeney (2014) found that children who reported feeling sadder on discovering they had made a non-optimal choice were more likely to make a different choice next time round. We examined two issues of interpretation regarding this finding: whether the emotion measured was indeed regret, and whether it was the experience of this emotion rather than the ability to anticipate it that impacted on decision making. To address the first issue, we varied the degree to which children aged 6-7 were responsible for an outcome, assuming that responsibility is a necessary condition for regret. The second was addressed by examining whether children could accurately anticipate that they would feel worse on discovering they had made a non-optimal choice. Children were more likely to feel sad if they were responsible for the outcome; however, even if they were not responsible, children were more likely than chance to report feeling sadder. Moreover, across all conditions feeling sadder was associated with making a better subsequent choice. In a separate task, we demonstrated that children of this age cannot accurately anticipate feeling sadder on discovering that they had not made the best choice. These findings suggest that although children may feel regret following a non-optimal choice, when they were not responsible for the outcome they may instead experience another negative emotion such as frustration. Experiencing either of these emotions seems to be sufficient to support better decision making.
Abstract:
Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species’ trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
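A minimal sketch of the stability analysis this comparison rests on, assuming a May-style random community matrix with normally distributed interaction strengths, a fixed connectance, and uniform self-regulation (all assumptions for illustration, not the paper's empirical matrices): a fixed point is locally stable when every eigenvalue of the community matrix has a negative real part.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_community_matrix(n=50, connectance=0.3, sigma=0.5, self_reg=-1.0):
    """May-style random community matrix: off-diagonal interactions are drawn
    at random with probability `connectance`; the diagonal holds intraspecific
    self-regulation."""
    A = np.zeros((n, n))
    mask = rng.random((n, n)) < connectance
    A[mask] = rng.normal(0.0, sigma, mask.sum())
    np.fill_diagonal(A, self_reg)
    return A

def is_stable(A):
    """Local stability: every eigenvalue must have a negative real part."""
    return np.max(np.linalg.eigvals(A).real) < 0

# Fraction of stable systems over many random draws
draws = [is_stable(random_community_matrix()) for _ in range(200)]
print(f"stable fraction: {np.mean(draws):.2f}")
```

Empirically derived matrices can then be passed through the same `is_stable` check for a like-for-like comparison with the random ensemble.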
Abstract:
Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys, in which case methods to correct for missing data that are based on imputation and observed characteristics will produce biased results. Methods: The identity of the HIV survey interviewer is typically associated with HIV testing participation, but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable, allowing estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates, even when non-participation is correlated with unobserved characteristics such as knowledge of HIV status. We introduce a new random effects method for these selection models which overcomes the non-convergence caused by collinearity, small sample bias, and incorrect inference in existing approaches. Our method is easy to implement in standard statistical software, and allows the construction of bootstrapped standard errors which adjust for the fact that the relationship between testing and HIV status is uncertain and needs to be estimated. Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CI) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana, we find little evidence of selection bias, as our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2%–1.6%), compared to 1.6% among those with a valid HIV test. In Zambia, our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0%–18.4%), compared to 12.1% among those with a valid HIV test. Those who decline to test in Zambia are therefore found to be more likely to be HIV positive. Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, can be implemented even when HIV prevalence or non-participation is very high or very low, and provides a practical solution to account for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals estimated in an example with high HIV prevalence indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
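For orientation only, a toy sketch of the general Heckman selection logic on synthetic data with a continuous outcome: a probit participation equation uses an instrument `z` (standing in for interviewer identity) as the exclusion restriction, and the outcome regression is augmented with the inverse Mills ratio. This is the classic two-step estimator, not the paper's random-effects estimator for a binary HIV outcome with bootstrapped standard errors; all variable names and data are invented.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Synthetic data: selection into testing depends on z and x; the outcome is
# only observed for selected individuals, and errors are correlated.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                      # selection variable (instrument)
x = rng.normal(size=n)                      # observed covariate
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
selected = (0.5 * z + 0.3 * x + u) > 0      # participation equation
y = 1.0 + 0.8 * x + e                       # outcome, observed only if selected

# Step 1: probit for participation, using z as the exclusion restriction
X_sel = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(selected.astype(int), X_sel).fit(disp=0)
xb = X_sel @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)           # inverse Mills ratio

# Step 2: outcome regression on the selected sample, augmented with the IMR;
# a sizeable IMR coefficient signals selection on unobservables.
X_out = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
print(sm.OLS(y[selected], X_out).fit().params)
```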
Abstract:
We describe a pre-processing correlation attack on an FPGA implementation of AES protected with a random clocking countermeasure that exhibits complex variations in both the location and amplitude of the power consumption patterns of the AES rounds. We demonstrate that the merged round patterns can be pre-processed to identify and extract the individual round amplitudes, enabling a successful power analysis attack. We show that the countermeasure's requirement to provide a varying execution time between processing rounds can be exploited to select a subset of data where sufficient current decay has occurred, further improving the attack. In comparison with the countermeasure's estimated security of 3 million traces against an integration attack, we show that, through application of our proposed techniques, the countermeasure can be broken with as few as 13,000 traces.
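A generic sketch of the correlation step itself (standard Hamming-weight CPA against the first-round S-box output), not the paper's round-amplitude pre-processing: `traces` is assumed to be a (traces × samples) array that has already been pre-processed and re-aligned, `plaintext_bytes` holds one plaintext byte per trace, and `sbox` is the 256-entry AES S-box (not reproduced here).

```python
import numpy as np

def hamming_weight(value):
    """Number of set bits in a byte (the usual CPA leakage model)."""
    return bin(int(value) & 0xFF).count("1")

def cpa_key_byte(traces, plaintext_bytes, sbox):
    """First-order CPA on one key byte: for every guess, correlate the
    hypothetical Hamming weight of sbox[pt ^ guess] with each trace sample
    and keep the guess with the strongest absolute correlation."""
    scores = np.zeros(256)
    tr_c = traces - traces.mean(axis=0)
    tr_norm = np.sqrt((tr_c ** 2).sum(axis=0))
    for guess in range(256):
        hyp = np.array([hamming_weight(sbox[p ^ guess]) for p in plaintext_bytes], float)
        hyp_c = hyp - hyp.mean()
        corr = (hyp_c @ tr_c) / (np.sqrt(hyp_c @ hyp_c) * tr_norm)
        scores[guess] = np.abs(corr).max()
    return int(scores.argmax())
```

The countermeasure's effect is to smear and merge these round patterns; the paper's contribution is the pre-processing that recovers per-round amplitudes (and the decay-based trace selection) so that a correlation step of this kind succeeds with far fewer traces.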
Abstract:
A new battery modelling method is presented, based on the simulation error minimization criterion rather than the conventional prediction error criterion. A new integrated optimization method to optimize the model parameters is proposed. This new method is validated on a set of Li-ion battery test data, and the results confirm the advantages of the proposed method in terms of model generalization performance and long-term prediction accuracy.
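A minimal sketch of the simulation-error criterion, assuming a first-order RC equivalent-circuit model and synthetic data (both assumptions for illustration; the abstract does not specify the model structure): the model is run open-loop over the whole record and the resulting output error is minimized, unlike one-step prediction error, which reuses measured outputs at every step.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_voltage(params, current, dt=1.0, ocv=3.7):
    """First-order RC equivalent circuit: V = OCV - R0*I - V_rc, where V_rc is
    the RC branch voltage integrated forward with explicit Euler steps."""
    r0, r1, c1 = params
    v_rc = 0.0
    out = np.empty_like(current)
    for k, i_k in enumerate(current):
        v_rc += dt * (-v_rc / (r1 * c1) + i_k / c1)
        out[k] = ocv - r0 * i_k - v_rc
    return out

def simulation_error(params, current, measured):
    # Simulation (output) error over the whole record, run open-loop.
    return np.mean((simulate_voltage(params, current) - measured) ** 2)

# Synthetic "measured" data from a known parameter set plus noise
rng = np.random.default_rng(0)
true = (0.05, 0.02, 800.0)
current = rng.uniform(0, 2, 600)
measured = simulate_voltage(true, current) + rng.normal(0, 1e-3, 600)

fit = minimize(simulation_error, x0=[0.01, 0.01, 500.0],
               args=(current, measured), method="Nelder-Mead")
print(fit.x)
```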
Abstract:
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
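A small sketch of the kind of generator being described, with invented labels and Beta-mixture parameters: vertex labels are drawn at random and edge weights are sampled from a Beta mixture. The paper's actual contribution, learning the mixture parameters from real networks via variational inference, is not reproduced here.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)

def sample_beta_mixture(weights, alphas, betas, size):
    """Draw edge weights from a Beta mixture: pick a component per edge from
    the mixing weights, then sample from that component's Beta distribution."""
    comps = rng.choice(len(weights), p=weights, size=size)
    return rng.beta(np.take(alphas, comps), np.take(betas, comps))

def labelled_weighted_graph(n=100, p=0.05, labels=("A", "B", "C"),
                            mix=(0.7, 0.3), a=(2.0, 5.0), b=(5.0, 1.5)):
    """Erdos-Renyi skeleton with discrete vertex labels and Beta-mixture weights."""
    g = nx.gnp_random_graph(n, p, seed=42)
    nx.set_node_attributes(g, {v: rng.choice(labels) for v in g.nodes}, "label")
    w = sample_beta_mixture(np.array(mix), a, b, g.number_of_edges())
    nx.set_edge_attributes(g, dict(zip(g.edges, w)), "weight")
    return g

g = labelled_weighted_graph()
print(g.number_of_nodes(), g.number_of_edges())
```

The Erdos-Renyi skeleton is only a placeholder; any structural generator could supply the topology while the label and weight distributions are fitted to data.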
Abstract:
Camera traps are used to estimate densities or abundances using capture-recapture and, more recently, random encounter models (REMs). We deploy REMs to describe an invasive-native species replacement process, and to demonstrate their wider application beyond abundance estimation. The Irish hare Lepus timidus hibernicus is a high-priority endemic of conservation concern. It is threatened by an expanding population of non-native European hares L. europaeus, an invasive species of global importance. Camera traps were deployed in thirteen 1 km squares, wherein the ratio of invader to native densities was corroborated by night-driven line transect distance sampling throughout the study area of 1652 km². Spatial patterns of invasive and native densities between the invader’s core and peripheral ranges, and native allopatry, were comparable between methods. Native densities in the peripheral range were comparable to those in native allopatry using REM, or marginally depressed using distance sampling. Numbers of the invader were substantially higher than the native in the core range, irrespective of method, with a 5:1 invader-to-native ratio indicating species replacement. We also describe a post hoc optimization protocol for REM which will inform subsequent (re-)surveys, allowing survey effort (camera hours) to be reduced by up to 57% without compromising the width of confidence intervals associated with density estimates. This approach will form the basis of a more cost-effective means of surveillance and monitoring for both the endemic and invasive species. The European hare undoubtedly represents a significant threat to the endemic Irish hare.
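For context, a sketch of the standard REM point estimate (Rowcliffe et al. 2008), which converts a camera trapping rate into a density using the animals' day range and the camera detection zone; the numbers below are illustrative, not values from this study.

```python
import math

def rem_density(detections, camera_days, day_range_km, radius_km, angle_rad):
    """Random encounter model point estimate:
    D = (y / t) * pi / (v * r * (2 + theta)),
    giving animals per km^2 when effort is in camera-days, day range in
    km/day, detection radius in km and detection arc in radians."""
    trap_rate = detections / camera_days
    return trap_rate * math.pi / (day_range_km * radius_km * (2 + angle_rad))

# Illustrative (made-up) inputs
print(round(rem_density(detections=120, camera_days=900,
                        day_range_km=1.5, radius_km=0.01, angle_rad=0.7), 1))
```

The post hoc optimization described in the abstract amounts to finding how much camera effort can be removed before the confidence interval around estimates like this one widens appreciably.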
Abstract:
What is meant by the term random? Do we understand how to identify which type of randomisation to use in our future research projects? We, as researchers, often explain randomisation to potential research participants as being a 50/50 chance of selection to either an intervention or control group, akin to drawing numbers out of a hat. Is this an accurate explanation? And are all methods of randomisation equal? This paper aims to guide the researcher through the different techniques used to randomise participants with examples of how they can be used in educational research.
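To make the distinction concrete, a short sketch contrasting two of the techniques such a guide typically covers, simple and permuted-block randomisation (the arm names and block size are illustrative): simple randomisation can leave groups unbalanced by chance, whereas permuted blocks guarantee balance after every completed block.

```python
import random

def simple_randomisation(n, seed=0):
    """Each participant independently has a 50/50 chance of each arm;
    group sizes can end up unbalanced by chance."""
    rng = random.Random(seed)
    return [rng.choice(["intervention", "control"]) for _ in range(n)]

def block_randomisation(n, block_size=4, seed=0):
    """Permuted blocks: within each block of size 4, exactly half the
    participants go to each arm, so balance is restored block by block."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

print(simple_randomisation(10))
print(block_randomisation(10))
```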
Abstract:
Although a number of studies have examined the developmental emergence of the counterfactual emotions of regret and relief, none have used tasks that resemble those used with adolescents and adults, which typically involve risky decision making. We examined the development of the counterfactual emotions of regret and relief in two experiments using a task in which children chose between one of two gambles that varied in risk. In regret trials they always received the best prize from that gamble but were then shown that they would have obtained a better prize had they chosen the alternative gamble, whereas in relief trials the other prize was worse. We compared two methods of measuring regret and relief based on children’s reported emotion on discovering the outcome of the alternative gamble: one in which children judged whether they now felt the same, happier, or sadder on seeing the other prize, and one in which children made emotion ratings on a 7-point scale after the other prize was revealed. With both methods, we found that 6- to 7-year-olds’ and 8- to 9-year-olds’ emotions varied appropriately depending on whether the alternative outcome was better or worse than the prize they had actually obtained, although the former method was more sensitive. Our findings indicate that by at least 6-7 years of age, children experience the same sorts of counterfactual emotions as adults in risky decision making tasks, and also suggest that such emotions are best measured by asking children to make comparative emotion judgments.
Abstract:
There is an extensive literature on potential applications of phase change materials in thermal regulation and in heat or cold storage. However, their low thermal conductivity imposes limitations on a wide range of applications with critical requirements for short response times or for high power during latent-heat charge/discharge cycles. Numerical codes were developed to obtain accurate solutions describing the kinetics of heat transfer with phase change, based on representative geometries, i.e. planar and spherical. Approximate solutions were also proposed, with corresponding validation criteria identified as a function of the properties of the phase change materials and of other relevant parameters such as size and time scales. These solutions made it possible to rigorously identify the factors underlying those limitations, to quantify their effects, and to establish quality criteria suited to different types of potential application. These criteria were systematised according to the selection methodologies proposed by Ashby and co-authors, with a view to the best material performance in representative applications, namely with requirements for energy density, response time, charge/discharge power, and operating temperature range. Some of the composites developed during the present work were included in this systematisation. The assessment of the above limitations led to the development of composite materials for heat or cold storage with markedly improved thermal response, through the incorporation of a phase with thermal conductivity much higher than that of the matrix. To this end, models were developed to optimise the spatial distribution of the conductive phase, so as to overcome the percolation limits predicted by classical conduction models for composites with random distribution, aiming at thermal performance gains with low conductive-phase fractions while ensuring that the energy density is not significantly affected. The models developed correspond to core-shell composites, based on cellular microstructures of the highly conductive phase impregnated with the phase change material itself. Besides aiming to minimise the conductive-phase fraction and the corresponding costs, the proposed composite models took into account their suitability for versatile, reproducible processing methods, preferably based on the emulsification of organic liquids in aqueous suspensions or other low-complexity processes using low-cost materials (both the phase change material and the conductive phase). The design of the microstructural distribution also considered the possibility of preferential orientation of highly anisotropic conductive phases (e.g. graphite) through self-organisation. Other stages of the project were subordinated to these goals of developing composites with optimised thermal response, in line with the predictions of the above-mentioned core-shell composite models.
Within this framework, three types of composites with cellular organisation of the conductive phase were prepared, with the following characteristics and methodologies: i) cellular paraffin-graphite composites for heat storage, prepared in situ by emulsification of a graphite suspension in molten paraffin; ii) cellular paraffin-Al2O3 composites for heat storage, prepared by impregnating paraffin into a cellular Al2O3 ceramic skeleton; iii) cellular composites for cold storage, obtained by impregnating cellular graphite matrices with a collagen solution, after prior preparation of the cellular graphite matrices. The ceramic-skeleton composites (ii) required the prior development of a processing method based on the emulsification of Al2O3 suspensions in molten paraffin, with suitable dispersant, surfactant, and skeleton-consolidating additives, making the skeleton self-supporting during the subsequent paraffin removal stages up to high-temperature firing and yielding cellular ceramics with adequate mechanical strength. The composites developed show significant improvements in thermal conductivity, with gains of more than one order of magnitude at conductive-phase fractions below 10 vol.% (4 W m-1 K-1), owing to the core-shell organisation and the additional contribution of graphite anisotropy through preferential orientation. Cold storage composites (iii) with random orientation of the conductive phase were also prepared, obtained by gelation of graphite particle suspensions in aqueous collagen solution. Despite the microstructural and shape stability conferred by gelation, these composites confirmed the expected limitation of randomly distributed composites when compared with the gains achieved with the core-shell organisation.
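A back-of-the-envelope illustration of why a percolating (cellular, core-shell) conductive network outperforms a random dispersion at the same volume fraction, using assumed representative conductivities (paraffin taken as about 0.2 W m-1 K-1 and graphite as 150 W m-1 K-1) rather than measured values from this work: the Maxwell-Eucken model for isolated inclusions stays close to the matrix conductivity, while the parallel (continuous-network) bound is orders of magnitude higher.

```python
def maxwell_eucken(k_matrix, k_disp, phi):
    """Effective conductivity of isolated inclusions randomly dispersed in a
    continuous matrix (Maxwell-Eucken model), phi = inclusion volume fraction."""
    num = k_disp + 2 * k_matrix + 2 * phi * (k_disp - k_matrix)
    den = k_disp + 2 * k_matrix - phi * (k_disp - k_matrix)
    return k_matrix * num / den

def parallel_bound(k_matrix, k_cond, phi):
    """Upper (parallel) bound, approached when the conductive phase forms a
    continuous network, as in the cellular core-shell composites."""
    return phi * k_cond + (1 - phi) * k_matrix

# Assumed representative values: paraffin matrix, graphite conductive phase, 10 vol.%
print(maxwell_eucken(0.2, 150.0, 0.10))   # random dispersion: barely above the matrix
print(parallel_bound(0.2, 150.0, 0.10))   # percolating network: orders of magnitude higher
```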
Abstract:
Preference-based measures of health have become widely used for measuring Health-Related Quality of Life in health economics. Hence, the development of preference-based measures of health has been a major concern for researchers throughout the world. This study aims to model health state preference data using a new preference-based measure of health (the SF-6D) and to suggest alternative models for predicting health state utilities using fixed and random effects models. It also seeks to investigate the problems found in the SF-6D and to suggest possible changes to it.
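A minimal sketch of the fixed versus random effects distinction on synthetic valuation data (the variable names, dimensions, and coefficients are invented and this is not the SF-6D specification itself): health-state dimension levels enter as fixed effects, while each respondent receives a random intercept to absorb between-respondent variation in their valuations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic valuation data: several health-state ratings per respondent,
# with a respondent-level random intercept.
rng = np.random.default_rng(0)
n_resp, n_states = 50, 8
df = pd.DataFrame({
    "respondent": np.repeat(np.arange(n_resp), n_states),
    "pain": rng.integers(1, 6, n_resp * n_states),
    "vitality": rng.integers(1, 6, n_resp * n_states),
})
resp_effect = rng.normal(0, 0.05, n_resp)[df["respondent"]]
df["utility"] = (1.0 - 0.06 * df["pain"] - 0.03 * df["vitality"]
                 + resp_effect + rng.normal(0, 0.02, len(df)))

# Fixed effects for the dimension levels, random intercept per respondent
model = smf.mixedlm("utility ~ pain + vitality", df, groups=df["respondent"]).fit()
print(model.summary())
```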