999 results for Heuristic-driven biases


Relevance:

100.00%

Publisher:

Abstract:

Behavioral finance, or behavioral economics, is a field of research holding that significant psychological and behavioral variables are involved in financial activities such as corporate finance and investment decisions (i.e., asset allocation, portfolio management, and so on). The field has attracted increasing interest from scholars and financial professionals since the episodes of multiple speculative bubbles and financial crises, as practical inconsistencies between economic events and traditional neoclassical financial theories pushed more and more researchers to look for new and broader models and theories. The purpose of this work is to present this field of research, which is still little known to a large majority. It is thus a survey that introduces the field's origins and main theories, contrasting them with the traditional finance theories that remain predominant today. The main question guiding this work is whether this area of inquiry can provide better explanations for real-life market phenomena. For that purpose, the study presents some market anomalies unsolved by traditional theories that have recently been addressed by behavioral finance researchers. In addition, it presents a practical application to portfolio management, comparing asset allocation under the traditional Markowitz approach with the Black-Litterman model, which incorporates some features of behavioral finance.
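
As a pointer to the comparison mentioned at the end of the abstract, here is a minimal numerical sketch of unconstrained Markowitz mean-variance weights next to a Black-Litterman posterior that blends equilibrium returns with one investor view; the covariance matrix, market weights, view and risk-aversion value are illustrative assumptions, not data from the study.

```python
# Minimal sketch: Markowitz mean-variance weights vs. a Black-Litterman
# posterior that blends market-implied returns with one investor view.
# All numbers below are illustrative assumptions, not data from the study.
import numpy as np

delta = 2.5                                   # assumed risk-aversion coefficient
Sigma = np.array([[0.040, 0.006],
                  [0.006, 0.090]])            # assumed asset covariance matrix
w_mkt = np.array([0.6, 0.4])                  # assumed market-cap weights

# Markowitz: unconstrained mean-variance weights for given expected returns mu
mu = np.array([0.05, 0.07])
w_markowitz = np.linalg.solve(delta * Sigma, mu)

# Black-Litterman: start from implied equilibrium returns, then add a view
pi = delta * Sigma @ w_mkt                    # implied equilibrium returns
tau = 0.05
P = np.array([[1.0, -1.0]])                   # one view: asset 1 outperforms asset 2
q = np.array([0.02])                          # ... by 2%
Omega = np.array([[0.0009]])                  # view uncertainty (variance)

A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ q
mu_bl = np.linalg.solve(A, b)                 # posterior expected returns
w_bl = np.linalg.solve(delta * Sigma, mu_bl)  # allocation under the blended view

print("Markowitz weights:", w_markowitz)
print("Black-Litterman weights:", w_bl)
```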

Relevance:

80.00%

Publisher:

Abstract:

Using a sample of daily flows of equity, multimarket and fixed-income investment funds in Brazil, and a methodology based on the direction of the net inflows of a large number of investment funds, aggregated into investor groups according to the average size of their investment (wealthy and non-wealthy), strong evidence was found of a herd effect that is heterogeneous across investor groups: the intensity of herding varies with investor size, fund type and period. A heuristic bias was also tested: price anchoring, which assumes that after a new historical high or low in stock prices there will be abnormal investor activity, because investors take this event as an indicator of future prices. Evidence was found that this phenomenon occurs in different types of investment funds, not only equity funds, and that it has a greater impact after a new low than after a record level of the Ibovespa index. However, the explanatory power of this bias over the herd effect is small, and there is a series of still-unexplored variables with greater explanatory power over herding. This study therefore found evidence that the behavioral finance assumptions that investors' information and expectations are not homogeneous, and that investors are influenced by the decisions of other investors, are correct, but only weak evidence that the price-anchoring heuristic bias plays a relevant role in investor behavior.
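
The herding methodology described (the direction of net flows across many funds, aggregated by investor group) resembles an LSV-style herding statistic. Below is a hedged sketch of that style of measure, not the study's exact formulation; the DataFrame layout and the omission of the small-sample adjustment are assumptions.

```python
# Hedged sketch: an LSV-style herding statistic computed from the direction of
# daily net flows across many funds. The study's exact methodology may differ;
# column layout and the omitted small-sample adjustment are assumptions.
import pandas as pd

def herding_by_day(flows: pd.DataFrame) -> pd.Series:
    """flows: rows = days, columns = funds, values = daily net flow.
    Returns |p_t - p_bar| per day, where p_t is the fraction of funds with a
    net inflow that day and p_bar its sample mean. LSV-type measures also
    subtract an expected-deviation adjustment, omitted here for brevity."""
    active = flows != 0
    p_t = (flows > 0).sum(axis=1) / active.sum(axis=1)
    return (p_t - p_t.mean()).abs()

# Usage idea: split funds into investor-size groups before calling, e.g.
# herding_by_day(flows[rich_funds]) vs. herding_by_day(flows[poor_funds]),
# to compare herding intensity across groups as the study does.
```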

Relevance:

30.00%

Publisher:

Abstract:

Routing is a very important step in VLSI physical design. In multi-net global routing, a set of nets is routed under delay and resource constraints. In this paper a delay-driven, congestion-aware global routing algorithm is developed; it is a heuristic-based method for a multi-objective NP-hard optimization problem. The proposed delay-driven Steiner tree construction method has O(n² log n) complexity, where n is the number of terminal points, and it provides an n-approximation solution to the critical-time minimization problem for a certain class of grid graphs. The existing timing-driven method (Hu and Sapatnekar, 2002) has complexity O(n⁴) and is implemented only on nets with a small number of sinks. Next we propose an FPTAS gradient algorithm for minimizing the total overflow. This is a concurrent approach that considers all the nets simultaneously, in contrast to existing approaches based on sequential rip-up and reroute. The algorithms are implemented on ISPD98-derived benchmarks and a drastic reduction in overflow is observed. (C) 2014 Elsevier Inc. All rights reserved.
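
The total overflow targeted by the FPTAS gradient algorithm is simply routed edge usage in excess of capacity, summed over the grid graph. A minimal sketch of that quantity follows; the data layout is an assumption.

```python
# Sketch: total overflow on a global-routing grid graph, the quantity the
# paper's concurrent FPTAS gradient algorithm minimises. Data layout is assumed.
from collections import Counter

def total_overflow(routes, capacity):
    """routes: {net_id: iterable of edges, each edge a frozenset of 2 grid cells}
    capacity: {edge: max number of nets allowed through that edge}
    Overflow of an edge = max(usage - capacity, 0); total overflow is the sum."""
    usage = Counter()
    for edges in routes.values():
        for e in edges:
            usage[e] += 1
    return sum(max(usage[e] - capacity.get(e, 0), 0) for e in usage)

# Example: two nets sharing an edge of capacity 1 -> overflow 1.
e = frozenset({(0, 0), (0, 1)})
print(total_overflow({"n1": [e], "n2": [e]}, {e: 1}))   # prints 1
```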

Relevance:

30.00%

Publisher:

Abstract:

Genetic variation at the serotonin transporter-linked polymorphic region (5-HTTLPR) is associated with altered amygdala reactivity and lack of prefrontal regulatory control. Similar regions mediate decision-making biases driven by contextual cues and ambiguity, for example the "framing effect." We hypothesized that individuals homozygous for the short (s) allele at the 5-HTTLPR would be more susceptible to framing. Participants, selected as homozygous for either the long (la) or s allele, performed a decision-making task in which they chose between receiving an amount of money for certain and taking a gamble. A strong bias was evident toward choosing the certain option when the option was phrased in terms of gains and toward gambling when the decision was phrased in terms of losses (the framing effect). Critically, this bias was significantly greater in the ss group than in the lala group. In simultaneously acquired functional magnetic resonance imaging data, the ss group showed greater amygdala activity during choices made in accord with the frame, compared with those made counter to it, an effect not seen in the lala group. These differences were also mirrored by differences in anterior cingulate-amygdala coupling between the genotype groups during decision making. Specifically, lala participants showed increased coupling during choices made counter to, relative to those made in accord with, the frame, with no such effect evident in ss participants. These data suggest that genetically mediated differences in prefrontal-amygdala interactions underpin interindividual differences in economic decision making.
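
The task contrasts a sure amount with an expected-value-matched gamble under gain and loss wordings. The toy sketch below shows one way such trials and a frame-susceptibility score could be set up; the amounts, function names and scoring rule are illustrative assumptions, not the study's design.

```python
# Toy sketch of a framed decision trial and a frame-susceptibility score.
# Amounts and the scoring rule are illustrative assumptions, not the study's design.
def framed_trial(endowment=50.0, keep_fraction=0.4, frame="gain"):
    """Sure option and gamble have equal expected value; only the wording differs."""
    sure = {"gain": f"keep {keep_fraction * endowment:.0f}",
            "loss": f"lose {(1 - keep_fraction) * endowment:.0f}"}[frame]
    gamble = f"{keep_fraction:.0%} chance to keep all {endowment:.0f}, else nothing"
    return sure, gamble

def frame_susceptibility(choices):
    """choices: list of (frame, picked) with picked in {'sure', 'gamble'}.
    Frame-consistent = sure under a gain frame, gamble under a loss frame."""
    consistent = sum((f == "gain" and c == "sure") or (f == "loss" and c == "gamble")
                     for f, c in choices)
    return consistent / len(choices)

# A perfectly frame-driven chooser scores 1.0; a frame-indifferent one ~0.5.
demo = [("gain", "sure"), ("loss", "gamble"), ("gain", "sure"), ("loss", "sure")]
print(frame_susceptibility(demo))   # 0.75
```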

Relevance:

30.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while power optimisation often relies on heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation.

The first question of our research is: how can we build a design flow that incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits.

The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation.

Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power optimisation and a power-driven delay optimisation are proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay.

The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We use SA to decide probabilistically whether to move from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a tree search algorithm used for traversing or searching a weighted tree, tree structure or graph. We use UCS to search within the AIG network for a specific AIG node order on which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
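
A compact sketch of the delay-constrained annealing move described above, assuming hypothetical power() and delay() cost functions over an AIG node order (stand-ins for the thesis's switching-power and longest-path estimates in ABC):

```python
# Hedged sketch of a delay-constrained simulated-annealing loop of the kind
# described above. power(), delay() and the neighbourhood move are hypothetical
# stand-ins, not the thesis's actual AIG cost models.
import math
import random

def anneal(order, power, delay, delay_budget, t0=1.0, cooling=0.95, steps=2000):
    best = cur = list(order)
    t = t0
    for _ in range(steps):
        cand = cur[:]                                   # neighbour: swap two nodes
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if delay(cand) > delay_budget:                  # reject infeasible moves
            continue
        dp = power(cand) - power(cur)
        if dp < 0 or random.random() < math.exp(-dp / t):
            cur = cand                                  # accept better or uphill move
            if power(cur) < power(best):
                best = cur[:]
        t *= cooling                                    # cool the temperature
    return best
```

The symmetric power-constrained delay optimisation follows by swapping the roles of the two cost functions.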

Relevance:

30.00%

Publisher:

Abstract:

The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, with the heuristic processing strategy preceding the algorithmic phase. This conclusion is based on three self-paced reading experiments investigating the processing of two-sentence discourses in which the context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with the context sentences, whereas responses to the questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial, and that deeper processing, which is sensitive to structure, occurs only if required. Implications for recent studies of quantifier scope ambiguity are discussed.

Relevance:

30.00%

Publisher:

Abstract:

This thesis aims to improve automation in Model-Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and automatic model transformations (MT). Put simply, in the MDE vision specialists use several models to represent a piece of software and produce the source code by automatically transforming these models. Automation is therefore a key factor and a founding principle of MDE. Beyond model transformations, other activities also require automation, e.g. the definition of modelling languages and software migration. In this context, the main contribution of this thesis is a general approach to improving MDE automation. Our approach is based on example-guided metaheuristic search. We apply this approach to two important MDE problems: (1) model transformation and (2) the precise definition of modelling languages. For the first problem, we distinguish between transformation in the context of migration and general model-to-model transformations. For migration, we propose a software clustering method based on a metaheuristic guided by clustering examples. Likewise, for general transformations, we learn model transformations using a genetic programming algorithm that draws on examples of past transformations. For the precise definition of modelling languages, we propose a metaheuristic-search-based method that derives well-formedness rules for metamodels, with the objective of discriminating well between valid and invalid models. The empirical studies we conducted show that the proposed approaches obtain good results, both quantitatively and qualitatively. These allow us to conclude that improving MDE automation using metaheuristic search methods and examples can contribute to a wider adoption of MDE in industry in the years to come.
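
Both applications share the same example-guided search skeleton: candidate artefacts are scored by how well they reproduce known examples. A schematic sketch follows; the candidate representation and the operators (random_candidate, mutate, apply) are placeholders, not the thesis's actual encodings.

```python
# Schematic sketch of example-guided metaheuristic search: candidate solutions
# (e.g., sets of transformation rules) are scored by how well they reproduce
# known examples. Representation, operators and scoring are placeholders.
import random

def search_by_example(examples, random_candidate, mutate, apply, generations=100,
                      pop_size=30):
    """examples: list of (source_model, expected_target_model) pairs."""
    def fitness(cand):
        # fraction of examples the candidate reproduces correctly
        return sum(apply(cand, src) == tgt for src, tgt in examples) / len(examples)

    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]           # keep the best half
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)
```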

Relevance:

30.00%

Publisher:

Abstract:

Studies of ignorance-driven decision making have been employed either to analyse when ignorance should prove advantageous on theoretical grounds or to examine whether human behaviour is consistent with an ignorance-driven inference strategy (e.g., the recognition heuristic). In the current study we examine whether, under conditions where such inferences might be expected, the advantages that theoretical analyses predict are evident in human performance data. A single experiment shows that, when asked to make relative wealth judgements, participants reliably use recognition as a basis for their judgements. Their wealth judgements under these conditions are reliably more accurate when some of the target names are unknown than when participants recognize all of the names (a "less-is-more effect"). These results are consistent across a number of variations: the number of options given to participants and the nature of the wealth judgement. A basic model of recognition-based inference predicts these effects.
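
The ignorance-driven strategy tested here reduces to a one-line decision rule for paired comparisons; a minimal sketch, with the guessing fallback as an illustrative assumption:

```python
# Minimal sketch of the recognition heuristic for a paired wealth judgement:
# if exactly one name is recognised, infer that it scores higher on the
# criterion; otherwise fall back on guessing (or knowledge, when available).
import random

def recognition_choice(a, b, recognised):
    if (a in recognised) != (b in recognised):
        return a if a in recognised else b      # ignorance-driven inference
    return random.choice([a, b])                # both or neither recognised: guess
```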

Relevance:

30.00%

Publisher:

Abstract:

“Fast & frugal” heuristics represent an appealing way of implementing bounded rationality and decision-making under pressure. The recognition heuristic is the simplest and most fundamental of these heuristics. Simulation and experimental studies have shown that this ignorance-driven inference can prove superior to knowledge-based inference (Borges, Goldstein, Ortmann & Gigerenzer, 1999; Goldstein & Gigerenzer, 2002) and have shown how the heuristic could develop from ACT-R’s forgetting function (Schooler & Hertwig, 2005). Mathematical analyses also demonstrate that, under certain conditions, a “less-is-more effect” will always occur (Goldstein & Gigerenzer, 2002). The further analyses presented in this paper show, however, that these conditions may constitute a special case, and that the less-is-more effect in decision-making is subject to the moderating influence of the number of options to be considered and the framing of the question.
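
The “certain conditions” come from the accuracy expression for recognition-based paired comparison in Goldstein and Gigerenzer (2002), with n of N items recognised, recognition validity α and knowledge validity β; a less-is-more effect can only arise when α > β. Stated in the usual notation:

```latex
% Expected proportion correct when n of N items are recognised
% (guessing, recognition, and knowledge cases respectively).
f(n) \;=\; \frac{(N-n)(N-n-1)}{N(N-1)}\cdot\frac{1}{2}
      \;+\; \frac{2\,n(N-n)}{N(N-1)}\,\alpha
      \;+\; \frac{n(n-1)}{N(N-1)}\,\beta
```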

Relevance:

30.00%

Publisher:

Abstract:

Inference on the basis of recognition alone is assumed to occur prior to accessing further information (Pachur & Hertwig, 2006). A counterintuitive result of this is the “less-is-more” effect: a drop in the accuracy of judging which of two or more items scores highest on a given criterion as more items are learned (Frosch, Beaman & McCloy, 2007; Goldstein & Gigerenzer, 2002). In this paper, we show that less-is-more effects are not unique to recognition-based inference but can also be observed with a knowledge-based strategy, provided two assumptions, limited information and differential access, are met. The LINDA model, which embodies these assumptions, is presented. Analysis of the less-is-more effects predicted by LINDA and by recognition-driven inference shows that they occur for similar reasons, casting doubt upon the “special” nature of recognition-based inference. Suggestions are made for empirical tests to compare knowledge-based and recognition-based less-is-more effects.

Relevance:

30.00%

Publisher:

Abstract:

Urbanization is one of the major forms of habitat alteration occurring at the present time. Although this is typically deleterious to biodiversity, some species flourish within these human-modified landscapes, potentially leading to negative and/or positive interactions between people and wildlife. Hence, up-to-date assessment of urban wildlife populations is important for developing appropriate management strategies. Surveying urban wildlife is limited by land partition and private ownership, rendering many common survey techniques difficult. Garnering public involvement is one solution, but this method is constrained by the inherent biases of non-standardised survey effort associated with voluntary participation. We used a television-led media approach to solicit national participation in an online sightings survey to investigate changes in the distribution of urban foxes in Great Britain and to explore relationships between urban features and fox occurrence and sightings density. Our results show that media-based approaches can generate a large national database on the current distribution of a recognisable species. Fox distribution in England and Wales has changed markedly within the last 25 years, with sightings submitted from 91% of urban areas previously predicted to support few or no foxes. Data were highly skewed with 90% of urban areas having <30 fox sightings per 1000 people km⁻². The extent of total urban area was the only variable with a significant impact on both fox occurrence and sightings density in urban areas; longitude and percentage of public green urban space were respectively, significantly positively and negatively associated with sightings density only. Latitude, and distance to nearest neighbouring conurbation had no impact on either occurrence or sightings density. Given the limitations associated with this method, further investigations are needed to determine the association between sightings density and actual fox density, and variability of fox density within and between urban areas in Britain.

Relevance:

30.00%

Publisher:

Abstract:

Background. Current models of concomitant, intermittent strabismus, heterophoria, convergence and accommodation anomalies are either theoretically complex or incomplete. We propose an alternative and more practical way to conceptualize clinical patterns. Methods. In each of three hypothetical scenarios (normal; high AC/A and low CA/C ratios; low AC/A and high CA/C ratios) there can be a disparity-biased or blur-biased “style”, despite identical ratios. We calculated a disparity bias index (DBI) to reflect these biases. We suggest how clinical patterns fit these scenarios and provide early objective data from small illustrative clinical groups. Results. Normal adults and children showed disparity bias (adult DBI 0.43, 95% CI 0.50-0.36; child DBI 0.20, 95% CI 0.31-0.07; p=0.001). Accommodative esotropes showed less disparity bias (DBI 0.03). In the high AC/A and low CA/C scenario, early presbyopes had a mean DBI of 0.17 (95% CI 0.28-0.06), compared to a DBI of -0.31 in convergence excess esotropes. In the low AC/A and high CA/C scenario, near exotropes had a mean DBI of 0.27, while we predict that non-strabismic, non-amblyopic hyperopes with good vision without spectacles will show lower DBIs. Disparity bias ranged between 1.25 and -1.67. Conclusions. Establishing disparity or blur bias, together with knowing whether convergence to target demand exceeds accommodation or vice versa, explains clinical patterns more effectively than AC/A and CA/C ratios alone. Excessive bias or inflexibility in near-cue use increases the risk of clinical problems. We suggest clinicians look carefully at the details of accommodation and convergence changes induced by lenses, dissociation and prisms, and use these to plan treatment in relation to the model.

Relevance:

30.00%

Publisher:

Abstract:

An analysis of diabatic heating and moistening processes in 12-36 hour lead time forecasts from 12 global circulation models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 hours is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations, while avoiding being too close to the initial spin-up as the models adjust to being driven from the YOTC analysis. A comparison of the vertical velocity and rainfall with observations and the YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, the moistening and heating profiles show large inter-model spread. In particular, there are large spreads in convective heating and moistening at mid-levels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time-step behaviour shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model output archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process-model experiments and the priorities for field experiments and future observing systems.

Relevance:

30.00%

Publisher:

Abstract:

Substantial biases in shortwave cloud forcing (SWCF) of up to ±30 W m−2 are found in the midlatitudes of the Southern Hemisphere in the historical simulations of 34 CMIP5 coupled general circulation models. The SWCF biases are shown to induce surface temperature anomalies localized in the midlatitudes, and are significantly correlated with the mean latitude of the eddy-driven jet, with a negative SWCF bias corresponding to an equatorward jet latitude bias. Aquaplanet model experiments are performed to demonstrate that the jet latitude biases are primarily induced by the midlatitude SWCF anomalies, such that the jet moves toward (away from) regions of enhanced (reduced) temperature gradients. The results underline the necessity of accurately representing cloud radiative forcings in state-of-the-art coupled models.

Relevance:

30.00%

Publisher:

Abstract:

Foundries can be found all over Brazil and they are very important to its economy. In 2008, a mixed integer programming model for small market-driven foundries was published, attempting to minimize delivery delays. We undertook a study of that model. Here, we present a new approach based on the decomposition of the problem into two sub-problems: production planning of alloys and production planning of items. Both sub-problems are solved using a Lagrangian heuristic based on transferences. An important aspect of the proposed heuristic is its ability to take into account a secondary practical objective: reducing furnace waste. Computational tests show that the approach proposed here generates good-quality solutions that outperform prior results. Journal of the Operational Research Society (2010) 61, 108-114. doi:10.1057/jors.2008.151
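
The "Lagrangian heuristic based on transferences" suggests the general relax-solve-update-repair pattern sketched below; the subproblem solvers, the repair ("transfer") step and the returned objects are placeholders under assumption, not the paper's actual formulation.

```python
# Generic sketch of a Lagrangian relaxation / subgradient loop of the kind the
# paper's transfer-based heuristic builds on. solve_alloy_subproblem,
# solve_item_subproblem, coupling_violation and repair are placeholders for the
# foundry model's actual subproblems, linking constraints and feasibility repair.
def lagrangian_heuristic(lmbda0, solve_alloy_subproblem, solve_item_subproblem,
                         coupling_violation, repair, step0=1.0, iters=50):
    lmbda, step = list(lmbda0), step0
    best = None
    for _ in range(iters):
        alloy_plan = solve_alloy_subproblem(lmbda)     # furnace/alloy lot sizes
        item_plan = solve_item_subproblem(lmbda)       # item production per period
        g = coupling_violation(alloy_plan, item_plan)  # subgradient of the dual
        feasible = repair(alloy_plan, item_plan)       # "transfer" step -> feasible plan
        if best is None or feasible.cost < best.cost:  # keep the best feasible plan
            best = feasible
        lmbda = [max(0.0, l + step * gi) for l, gi in zip(lmbda, g)]
        step *= 0.95                                   # diminishing step size
    return best
```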