895 results for "Hierarchical task analysis"
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis (PCA), we show that the SGG lottery-panel task captures two dimensions of individual risky decision making, i.e., subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by prospect theory (PT); and gender differences, i.e., males being consistently less risk averse than females, with both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort, which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, in contrast, we find that the effect of incorporating losses into the outcomes is less clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, while at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that, compared with gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful to describe behavior under uncertainty and to explain behavior in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time providing a robust framework, compatible with present and even future, more complex descriptions of human attitudes toward risk.
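As a rough, hypothetical illustration of the kind of analysis described above (not the SGG task itself), the following Python sketch applies PCA to a synthetic matrix of panel choices; all data, shapes, and variable names are invented.

import numpy as np
from sklearn.decomposition import PCA

# Synthetic choice data: one row per subject, one column per lottery panel;
# entries are the index of the chosen lottery (0 = safest option).
rng = np.random.default_rng(0)
choices = rng.integers(0, 5, size=(200, 6)).astype(float)

# Extract two components, in the spirit of the two dimensions reported above:
# roughly, average risk taking and sensitivity to changes in risk-return.
pca = PCA(n_components=2)
scores = pca.fit_transform(choices)
print(pca.explained_variance_ratio_)
print(scores[:5])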
Abstract:
The main purpose of the work described in this paper is to examine the extent to which the L2 developmental changes predicted by Kroll and Stewart's (1994) Revised Hierarchical Model (RHM) can be understood through word association response behaviour. The RHM attempts to account for the relative "strength of the links between words and concepts in each of the bilingual's languages" (Kroll, Van Hell, Tokowicz & Green, 2010, p. 373). It proposes that bilinguals with higher L2 proficiency tend to rely less on mediation, while less proficient L2 learners tend to rely on mediation and access L2 words by translating from L1 equivalents. In this paper, I present findings from a simple word association task. More proficient learners provided a greater proportion of collocational links, suggesting that they mediate less than less proficient learners do. The results provide tentative support for Kroll and Stewart's model.
Abstract:
Despite the increasing number of studies examining the correlates of interest and boredom, surprisingly little research has focused on within-person fluctuations in these emotions, making it difficult to describe their situational nature. To address this gap in the literature, this study conducted repeated measurements (12 times) on a sample of 158 undergraduate students using a variety of self-report assessments, and examined the within-person relationships between task-specific perceptions (expectancy, utility, and difficulty) and interest and boredom. The study further explored the role of achievement goals in predicting between-person differences in these within-person relationships. Using hierarchical linear modeling, we found that, on average, higher perceptions of expectancy and utility, as well as a lower perception of difficulty, were associated with higher interest and lower boredom within individuals. Moreover, mastery-approach goals weakened the negative within-person relationship between difficulty and interest and the negative within-person relationship between utility and boredom. Mastery-avoidance and performance-avoidance goals strengthened the negative relationship between expectancy and boredom. These results suggest how educators can more effectively instruct students with different types of goals, minimizing boredom and maximizing interest and learning.
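The within-person analysis described above could, in principle, be specified as a two-level model. The sketch below shows one such specification with statsmodels, using synthetic data and hypothetical column names rather than the study's actual variables or estimates; the cross-level goal moderators are omitted.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic repeated measures: 158 students, 12 waves each (made-up values).
rng = np.random.default_rng(1)
n_students, n_waves = 158, 12
n = n_students * n_waves
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_waves),
    "expectancy": rng.normal(size=n),
    "utility": rng.normal(size=n),
    "difficulty": rng.normal(size=n),
})
df["interest"] = (0.4 * df["expectancy"] + 0.3 * df["utility"]
                  - 0.2 * df["difficulty"] + rng.normal(size=n))

# Level-1 predictors are the task-specific perceptions; the random intercept
# captures between-student differences.
model = smf.mixedlm("interest ~ expectancy + utility + difficulty",
                    df, groups=df["student"])
print(model.fit().summary())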
Abstract:
In probabilistic decision tasks, an expected value (EV) of a choice is calculated and, after the choice has been made, can be updated based on a temporal difference (TD) prediction error between the EV and the reward magnitude (RM) obtained. The EV is computed as the probability of obtaining a reward multiplied by the RM. To understand the contribution of different brain areas to these decision-making processes, functional magnetic resonance imaging activations related to EV versus RM (or outcome) were measured in a probabilistic decision task. Activations in the medial orbitofrontal cortex were correlated with both RM and EV, and a conjunction analysis confirmed that they extend toward the pregenual cingulate cortex. From these representations, TD reward prediction errors could be produced. Activations in areas that receive projections from the orbitofrontal cortex, including the ventral striatum, midbrain, and inferior frontal gyrus, were correlated with the TD error. Activations in the anterior insula were correlated negatively with EV, occurring when low reward outcomes were expected, and also with the uncertainty of the reward, implicating this region in basic and crucial decision-making parameters: low expected outcomes and uncertainty.
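A minimal numerical sketch of the two quantities named above, the expected value and the temporal difference prediction error, using made-up values:

# EV = probability of reward x reward magnitude (illustrative numbers only).
p_reward = 0.7
reward_magnitude = 10.0
expected_value = p_reward * reward_magnitude            # 7.0

# After the outcome, the TD prediction error is outcome minus expectation.
td_error_rewarded = reward_magnitude - expected_value   # +3.0, better than expected
td_error_omitted = 0.0 - expected_value                 # -7.0, worse than expected
print(expected_value, td_error_rewarded, td_error_omitted)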
Abstract:
Reliable evidence of trends in the illegal ivory trade is important for informing decision making for elephants, but it is difficult to obtain because of the covert nature of the trade. The Elephant Trade Information System, a global database of reported seizures of illegal ivory, holds the only extensive information on the illicit trade available. However, inherent biases in seizure data make it difficult to infer trends; countries differ in their ability to make and report seizures, and these differences cannot be directly measured. We developed a new modelling framework to provide quantitative evidence on trends in the illegal ivory trade from seizure data. The framework used Bayesian hierarchical latent variable models to reduce bias in seizure data by identifying proxy variables that describe the variability in seizure and reporting rates between countries and over time. The models produced bias-adjusted smoothed estimates of relative trends in illegal ivory activity for raw and worked ivory in three weight classes. Activity is represented by two indicators: the Transactions Index, describing the number of illegal ivory transactions, and the Weights Index, describing the total weight of illegal ivory transactions, at global, regional or national levels. Globally, activity was found to be rapidly increasing and at its highest level in 16 years, more than doubling from 2007 to 2011 and tripling from 1998 to 2011. Over 70% of the Transactions Index comes from shipments of worked ivory weighing less than 10 kg, and the rapid increase since 2007 is mainly due to increased consumption in China. Over 70% of the Weights Index comes from shipments of raw ivory weighing at least 100 kg, mainly moving from Central and East Africa to Southeast and East Asia. The results tie together recent findings on trends in poaching rates, declining populations and consumption, and provide detailed evidence to inform international decision making on elephants.
Abstract:
This paper presents a hierarchical clustering method for semantic Web service discovery. The method aims to improve the accuracy and efficiency of traditional service discovery based on the vector space model. Each Web service is converted into a standard vector format from its Web service description document. With the help of WordNet, a semantic analysis is conducted to reduce the dimension of the term vector and to perform semantic expansion to meet the user's service request. The process and algorithm of hierarchical-clustering-based semantic Web service discovery are discussed, and validation is carried out on a dataset.
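A minimal sketch of the clustering step, assuming TF-IDF term vectors and agglomerative clustering in place of the paper's full pipeline; the service descriptions are invented, and the WordNet-based dimension reduction and query expansion are not reproduced here.

from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

# Toy service descriptions standing in for WSDL-derived text.
descriptions = [
    "book flight ticket airline reservation",
    "reserve airline seat and flight",
    "weather forecast temperature city",
    "city weather report humidity",
]

# Vector space model, then hierarchical (agglomerative) clustering.
vectors = TfidfVectorizer().fit_transform(descriptions).toarray()
tree = linkage(vectors, method="average", metric="cosine")

# Cut the dendrogram into two clusters; a real system would match the user's
# (semantically expanded) request against cluster representatives first.
print(fcluster(tree, t=2, criterion="maxclust"))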
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems with covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows an input tensor to be projected simultaneously onto more than one direction along each mode. In practice, however, multi-dimensional data are often collected under the same or very similar conditions, so the data share some common latent components while also having their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of the parameters across all the regression tasks and the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, giving a lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
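To make the parameter-count argument concrete, here is a small numpy sketch of a Tucker-structured coefficient tensor with illustrative shapes and random data. It shows the reconstruction W = G x_1 U1 x_2 U2 x_3 U3 and the inner-product prediction of a linear tensor regression; it is not the paper's linked multiway model or its regulariser.

import numpy as np

rng = np.random.default_rng(2)
I, J, K = 20, 20, 20          # tensor covariate dimensions (hypothetical)
R1, R2, R3 = 3, 3, 3          # Tucker ranks (hypothetical)

# Tucker form: a small core G plus one factor matrix per mode.
G = rng.normal(size=(R1, R2, R3))
U1 = rng.normal(size=(I, R1))
U2 = rng.normal(size=(J, R2))
U3 = rng.normal(size=(K, R3))

# Reconstruct the full coefficient tensor via the mode products.
W = np.einsum("abc,ia,jb,kc->ijk", G, U1, U2, U3)

full_params = I * J * K
tucker_params = G.size + U1.size + U2.size + U3.size
print(full_params, tucker_params)   # 8000 vs. 207 free parameters

# Prediction in a linear tensor regression is the inner product <W, X>.
X = rng.normal(size=(I, J, K))
y_hat = np.sum(W * X)
print(float(y_hat))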
Abstract:
We define and experimentally test a public provision mechanism that meets three basic ethical requirements and allows community members to influence, via monetary bids, which of several projects is implemented. For each project, participants are assigned personal values, which can be positive or negative. We provide either public or private information about personal values. This produces two distinct public provision games, which are experimentally implemented and analyzed for various projects. In spite of the complex experimental task, participants do not rely on bidding their own personal values as an obvious simple heuristic whose general acceptance would result in fair and efficient outcomes. Rather, they rely on strategic underbidding. Although underbidding is affected by projects’ characteristics, the provision mechanism mostly leads to the implementation of the most efficient project.
Abstract:
Background Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries.

Methods We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m2 [underweight], 18·5 kg/m2 to <20 kg/m2, 20 kg/m2 to <25 kg/m2, 25 kg/m2 to <30 kg/m2, 30 kg/m2 to <35 kg/m2, 35 kg/m2 to <40 kg/m2, ≥40 kg/m2 [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue.

Findings We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m2 (95% credible interval 21·3–22·1) in 1975 to 24·2 kg/m2 (24·0–24·4) in 2014 in men, and from 22·1 kg/m2 (21·7–22·5) in 1975 to 24·4 kg/m2 (24·2–24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m2 in central Africa and south Asia to 29·2 kg/m2 (28·6–29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m2 (21·4–22·3) in south Asia to 32·2 kg/m2 (31·5–32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5–17·4) to 8·8% (7·4–10·3) in men and from 14·6% (11·6–17·9) to 9·7% (8·3–11·1) in women. South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8–29·2) in men and 24·0% (18·9–29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4–4·1) in 1975 to 10·8% (9·7–12·0) in 2014 in men, and from 6·4% (5·1–7·8) to 14·9% (13·6–16·1) in women. 2·3% (2·0–2·7) of the world's men and 5·0% (4·4–5·6) of women were severely obese (ie, have BMI ≥35 kg/m2). Globally, prevalence of morbid obesity was 0·64% (0·46–0·86) in men and 1·6% (1·3–1·9) in women.

Interpretation If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia.
Abstract:
Evolutionary novelties in the skeleton are usually expressed as changes in the timing of growth of features intrinsically integrated at different hierarchical levels of development(1). As a consequence, most of the shape traits observed across species vary quantitatively rather than qualitatively(2), in a multivariate space(3) and in a modularized way(4,5). Because most phylogenetic analyses normally use discrete, hypothetically independent characters(6), previous attempts have disregarded the phylogenetic signal potentially enclosed in the shape of morphological structures. When analysing low taxonomic levels, where most variation is quantitative in nature, solving basic requirements such as the choice of characters and the capacity to use continuous, integrated traits is of crucial importance for recovering wider phylogenetic information. This is particularly relevant when analysing extinct lineages, where available data are limited to fossilized structures. Here we show that when continuous, multivariate and modularized characters are treated as such, cladistic analysis successfully resolves relationships among the main Homo taxa. Our approach is based on a combination of cladistics, evolutionary-development-derived selection of characters, and geometric morphometric methods. In contrast with previous cladistic analyses of hominid phylogeny, our method accounts for the quantitative nature of the traits and respects their patterns of morphological integration. Because complex phenotypes are observable across different taxonomic groups and are potentially informative about phylogenetic relationships, future analyses should strongly consider the incorporation of these types of traits.
Abstract:
Several accounts put forth to explain the flash-lag effect (FLE) rely mainly on either spatial or temporal mechanisms. Here we investigated the relationship between these mechanisms using psychophysical and theoretical approaches. In a first experiment we assessed the magnitudes of the FLE and of temporal-order judgments performed under identical visual stimulation. The results were interpreted by means of simulations of an artificial neural network, which was also employed to make predictions concerning the FLE. The model predicted that a spatio-temporal mislocalisation would emerge between two moving stimuli, one continuous and one abrupt-onset. Additionally, a straightforward prediction of the model was that the magnitude of this mislocalisation should be task-dependent, increasing when the abrupt-onset moving stimulus switches from serving as a temporal marker only to serving as both a temporal and a spatial marker. Our findings confirmed the model's predictions and point to an indissoluble interplay between spatial facilitation and processing delays in the FLE.
Abstract:
Point placement strategies aim at mapping data points represented in higher dimensions to bi-dimensional spaces and are frequently used to visualize relationships amongst data instances. They have been valuable tools for the analysis and exploration of data sets of various kinds. Many conventional techniques, however, do not behave well when the number of dimensions is high, as in the case of document collections. Later approaches handle that shortcoming, but may introduce too much clutter to allow flexible exploration to take place. In this work we present a novel hierarchical point placement technique that is capable of dealing with these problems. While good grouping and separation of data with high similarity are maintained without increasing the computational cost, its hierarchical structure lends itself both to exploration at various levels of detail and to handling data in subsets, improving analysis capability and also allowing the manipulation of larger data sets.
Abstract:
In this work we introduce a new hierarchical surface decomposition method for multiscale analysis of surface meshes. In contrast to other multiresolution methods, our approach relies on spectral properties of the surface to build a binary hierarchical decomposition. Namely, we utilize the first nontrivial eigenfunction of the Laplace-Beltrami operator to recursively decompose the surface. For this reason we coin our surface decomposition the Fiedler tree. Using the Fiedler tree ensures a number of attractive properties, including: mesh-independent decomposition, well-formed and nearly equi-areal surface patches, and noise robustness. We show how the evenly distributed patches can be exploited for generating multiresolution high quality uniform meshes. Additionally, our decomposition permits a natural means for carrying out wavelet methods, resulting in an intuitive method for producing feature-sensitive meshes at multiple scales.
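The recursion can be illustrated with a graph Laplacian standing in for the Laplace-Beltrami operator: the sketch below bipartitions a toy connectivity graph by the sign of the first nontrivial eigenvector and recurses. It mirrors the idea of the decomposition only, under these simplifying assumptions, and includes none of the paper's meshing or wavelet machinery.

import numpy as np
from scipy.sparse import csgraph
from scipy.linalg import eigh

def fiedler_split(adjacency, nodes):
    # Split a node set by the sign of the first nontrivial Laplacian eigenvector.
    L = csgraph.laplacian(adjacency[np.ix_(nodes, nodes)].astype(float))
    _, vecs = eigh(L)
    fiedler = vecs[:, 1]                 # eigenvector of the second-smallest eigenvalue
    left = [n for n, v in zip(nodes, fiedler) if v < 0]
    right = [n for n, v in zip(nodes, fiedler) if v >= 0]
    return left, right

def fiedler_tree(adjacency, nodes, min_size=2):
    # Recursively bipartition until patches reach the minimum size.
    if len(nodes) <= min_size:
        return nodes
    left, right = fiedler_split(adjacency, nodes)
    if not left or not right:            # degenerate split: stop recursing
        return nodes
    return [fiedler_tree(adjacency, left, min_size),
            fiedler_tree(adjacency, right, min_size)]

# Toy connectivity: two triangles (vertices 0-2 and 3-5) joined by one edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(fiedler_tree(A, list(range(6))))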
Abstract:
MCNP has so far stood as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers perform calculations faster and become more affordable over time. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may turn out to be unfeasible if a straight analogue transport procedure (no use of variance reduction techniques) is employed and precise results are demanded. Calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed source representation in MCNP to perform the above-mentioned task with more effective sampling (characterizing the neutron population in the vicinity of the tallying region and using it in a geometrically reduced, coupled simulation). An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure proves very efficient, giving adequate and precise results in shorter times than the standard analogue procedure.
Abstract:
A continuous version of the hierarchical spherical model in dimension d = 4 is investigated. Two limit distributions of the block spin variable X(γ), normalized with exponents γ = d + 2 and γ = d at and above the critical temperature, are established. These results are proven by solving certain evolution equations corresponding to the renormalization group (RG) transformation of the O(N) hierarchical spin model of block size L^d in the limit L ↓ 1 and N → ∞. Starting far away from the stationary Gaussian fixed point, the trajectories of this dynamical system pass through two different regimes with distinguishable crossover behavior. An interpretation of these trajectories is given by the geometric theory of functions, which describes precisely the motion of the Lee-Yang zeroes. The large-N limit of the RG transformation with L^d fixed equal to 2, at criticality, has recently been investigated in both the weak and strong coupling regimes by Watanabe (J. Stat. Phys. 115:1669-1713, 2004). Although our analysis deals only with the N = ∞ case, it complements various aspects of that work.