800 results for "task performance"


Relevance: 30.00%

Abstract:

We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task captures two dimensions of individual risky decision making: subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by prospect theory (PT); and gender differences, i.e., males are consistently less risk averse than females, but both genders are similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort, which improves their performance, "choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms" (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In a second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and with mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, at both the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments, females prefer safer lotteries than males do. Regarding our second dimension of risk attitudes, we observe in all treatments an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal further regularities, such as a lower degree of risk taking in large-stake treatments than in low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and the domain effect (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is affected more by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while providing a robust framework compatible with present and future, more complex descriptions of human attitudes toward risk.
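The two-dimensional structure described above can be illustrated with a small synthetic sketch (the data-generating process, panel count, and parameters below are invented for illustration, not taken from the SGG experiments): principal component analysis of simulated lottery-panel choices recovers a dominant component for average risk taking and a second one for sensitivity to the risk premium.

```python
import numpy as np

# Hypothetical data: rows = subjects, columns = lottery panels with an
# increasing risk premium; entries = riskiness level of the chosen lottery.
rng = np.random.default_rng(0)
n_subjects, n_panels = 200, 4
level = rng.normal(3.0, 1.0, size=(n_subjects, 1))   # average risk taking
slope = rng.normal(0.5, 0.2, size=(n_subjects, 1))   # sensitivity to premium
premium = np.arange(n_panels)                        # risk-return gradient
choices = level + slope * premium + rng.normal(0, 0.3, (n_subjects, n_panels))

# PCA via eigendecomposition of the sample covariance matrix
X = choices - choices.mean(axis=0)
cov = X.T @ X / (n_subjects - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = eigvals / eigvals.sum()
scores = X @ eigvecs  # PC1 ~ average risk taking, PC2 ~ premium sensitivity
```

Because the two latent traits are built into the synthetic data, the first two components absorb almost all of the variance.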


This paper employs a vector autoregressive model to investigate the impact of macroeconomic and financial variables on a UK real estate return series. The results indicate that unexpected inflation and the interest-rate term spread have explanatory power for the property market. However, the most significant influence on the real estate series is the lagged values of the series themselves. We conclude that identifying the factors that have determined UK property returns over the past twelve years remains a difficult task.
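A minimal sketch of the modeling approach, on synthetic data rather than the UK series used in the paper: a VAR(1) fitted by equation-wise OLS, in which the strongest estimated influence on the simulated return series is its own lag, mirroring the finding above.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
# Hypothetical system: returns driven mostly by their own lag (0.7),
# weakly by lagged unexpected inflation (0.1).
A = np.array([[0.7, 0.1],
              [0.0, 0.5]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 0.1, 2)

# VAR(1) by least squares: Y_t = B Y_{t-1} + e_t
Y, X = y[1:], y[:-1]
B = np.linalg.lstsq(X, Y, rcond=None)[0].T  # estimated coefficient matrix
```

The estimated own-lag coefficient dominates the cross-variable one, which is the pattern the paper reports for the property series.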


Recent studies have documented that self-determined choice does indeed enhance performance. However, the precise neural mechanisms underlying this effect are not well understood. We examined the neural correlates of the facilitative effects of self-determined choice using functional magnetic resonance imaging (fMRI). Participants played a game-like stopwatch task with either a stopwatch they had selected themselves (self-determined-choice condition) or one assigned to them without choice (forced-choice condition). Our results showed that self-determined choice enhanced performance on the stopwatch task, even though the choices were clearly irrelevant to task difficulty. Neuroimaging results showed that failure feedback, compared with success feedback, elicited a drop in ventromedial prefrontal cortex (vmPFC) activation in the forced-choice condition, but not in the self-determined-choice condition, indicating that the negative reward value associated with failure feedback vanished in the self-determined-choice condition. Moreover, the vmPFC's resilience to failure in the self-determined-choice condition was significantly correlated with the performance increase. Striatal responses to failure and success feedback were not modulated by the choice condition, indicating a dissociation between the vmPFC and striatal activation patterns. These findings suggest that the vmPFC plays a unique and critical role in the facilitative effects of self-determined choice on performance.


Despite the increasing number of studies examining the correlates of interest and boredom, surprisingly little research has focused on within-person fluctuations in these emotions, making it difficult to describe their situational nature. To address this gap in the literature, this study conducted repeated measurements (12 occasions) on a sample of 158 undergraduate students using a variety of self-report assessments, and examined the within-person relationships between task-specific perceptions (expectancy, utility, and difficulty) and interest and boredom. The study further explored the role of achievement goals in predicting between-person differences in these within-person relationships. Using hierarchical linear modeling, we found that, on average, higher perceptions of both expectancy and utility, as well as a lower perception of difficulty, were associated with higher interest and lower boredom within individuals. Moreover, mastery-approach goals weakened the negative within-person relationship between difficulty and interest and the negative within-person relationship between utility and boredom. Mastery-avoidance and performance-avoidance goals strengthened the negative relationship between expectancy and boredom. These results suggest how educators can more effectively instruct students with different types of goals, minimizing boredom and maximizing interest and learning.
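The within-person analysis can be sketched on invented data (the sample sizes, effect sizes, and variable names below are illustrative only, and per-person OLS slopes stand in for the full hierarchical linear model): centering each student's difficulty ratings on their own mean isolates the within-person association with interest.

```python
import numpy as np

rng = np.random.default_rng(2)
n_students, n_waves = 158, 12
difficulty = rng.uniform(1, 5, size=(n_students, n_waves))
person_mean = rng.normal(4.0, 0.5, size=(n_students, 1))
# Hypothetical generative model: within a person, higher perceived
# difficulty lowers interest (true within-person slope = -0.4).
interest = (person_mean
            - 0.4 * (difficulty - difficulty.mean(axis=1, keepdims=True))
            + rng.normal(0, 0.3, (n_students, n_waves)))

def within_slope(x, y):
    """OLS slope of y on person-centered x for a single student."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

slopes = np.array([within_slope(difficulty[i], interest[i])
                   for i in range(n_students)])
```

The average of the per-person slopes recovers the negative difficulty-interest relationship; in the actual study, the multilevel model additionally lets achievement goals moderate these slopes.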


The objective of this article is to review the scientific literature on airflow distribution systems and ventilation effectiveness in order to identify and assess the most suitable room air distribution methods for various spaces. In this study, different ventilation systems are classified according to specific requirements and assessment procedures. The study shows that eight ventilation methods have been employed in the built environment for different purposes and tasks. The investigation shows that numerous studies have examined ventilation effectiveness, but few have addressed other aspects of air distribution. Among existing ventilation systems, the performance of each method varies from case to case, owing to the different uses of the ventilation system within a room and the different assessment indices applied. This review shows that ventilation effectiveness or efficiency should be assessed according to the task of the ventilation system, such as removing heat, removing pollutants, supplying fresh air to the breathing zone, or protecting occupants from cross-infection. The analysis results form a basic framework for the application of airflow distribution, for the benefit of designers, architects, engineers, installers and building owners.


Background: Daily consumption of Concord grape juice (CGJ) over three to four months has been shown to improve memory function in adults with mild cognitive impairment and to reduce blood pressure in hypertensive adults. These benefits are likely due to the high concentration of polyphenols in CGJ. Increased stress can impair cognitive function and elevate blood pressure, so we examined the potential benefit of CGJ in individuals with demanding, somewhat stressful lifestyles. Objective: To examine the effects of twelve weeks' daily consumption of CGJ on cognitive function, driving performance, and blood pressure in healthy, middle-aged working mothers. Design: Twenty-five healthy mothers of pre-teen children, aged 40-50 years and employed for more than 30 hours/week, consumed 12 oz (355 ml) of CGJ (containing 777 mg total polyphenols) or an energy-, taste- and appearance-matched placebo daily for twelve weeks, according to a randomised crossover design with a four-week washout. Verbal and spatial memory, executive function, attention, blood pressure and mood were assessed at baseline, six weeks and twelve weeks. Immediately following the cognitive battery, a subsample of seventeen participants completed a driving performance assessment in the University of Leeds Driving Simulator. The twenty-five-minute driving task required participants to match the speed and direction of a lead vehicle. Results: Significant improvements in immediate spatial memory and driving performance were observed following CGJ relative to placebo. There was evidence of an enduring effect of CGJ, such that participants who received CGJ in arm 1 maintained better performance in the placebo arm. Conclusions: Cognitive benefits associated with chronic consumption of flavonoid-rich grape juice are not exclusive to adults with mild cognitive impairment. Moreover, these benefits are apparent in complex everyday tasks such as driving. Effects may persist beyond cessation of flavonoid consumption, and future studies should carefully consider the length of washout within crossover designs.


Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. Because predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although convenient for their simplicity, can seldom cover all the aspects that a complete and reliable evaluation must consider. For this reason, graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods is their ability to depict the trade-offs between evaluation aspects in a multidimensional space, rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. To select a suitable graphical method for a given task, however, it is crucial to identify each method's strengths and weaknesses. This paper surveys graphical methods commonly used for predictive performance evaluation. By presenting these methods within the same framework, we hope to shed some light on which methods are more suitable in different situations.
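As one concrete example of such a graphical method, an ROC curve depicts the trade-off between the true-positive and false-positive rates across all decision thresholds, with the area under the curve as one scalar summary of it; the scores and labels below are invented for illustration:

```python
import numpy as np

def roc_points(scores, labels):
    """Compute ROC (FPR, TPR) points by sweeping the decision threshold
    over the classifier scores, from highest to lowest."""
    order = np.argsort(-scores)
    labels = labels[order]
    tps = np.cumsum(labels)          # true positives at each threshold
    fps = np.cumsum(1 - labels)      # false positives at each threshold
    tpr = tps / labels.sum()
    fpr = fps / (len(labels) - labels.sum())
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])
fpr, tpr = roc_points(scores, labels)
# Area under the curve via the trapezoid rule
auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1])) / 2.0)  # -> 0.84
```

The curve itself, not the single AUC number, is what exposes the trade-off the survey is concerned with.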


This paper presents the use of a multiprocessor architecture to improve the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is a computationally intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing, and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed on both platforms increased with image resolution. In addition, the ratio of execution time to communication time (Rt/Rc) as a function of sample size showed narrower variation on the DSP platform than on the MPI platform, indicating the DSP platform's better suitability for parallel image reconstruction.
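The reported trend, speedup growing with image resolution, can be expressed with the usual definitions of speedup and parallel efficiency; the timings below are hypothetical, not the paper's measurements:

```python
def speedup(t_serial, t_parallel):
    """Classical speedup: serial time divided by parallel time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup per processor (1.0 = ideal)."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical reconstruction times (seconds) on 1 and 8 processors
# for images of growing resolution.
t1 = {128: 4.0, 256: 18.0, 512: 80.0}
t8 = {128: 0.8, 256: 3.0, 512: 11.5}
speedups = {r: speedup(t1[r], t8[r]) for r in t1}
```

Larger images amortize communication over more computation, which is why speedup (and the Rt/Rc ratio) improves with resolution.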


Assessing routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittently connected wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics: the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, evolving graphs, was proposed to help capture the dynamic behavior of such networks, with a view to constructing least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically efficient and intriguing; however, their use in practical situations has not been studied. The objective of our work is therefore to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator first to implement an evolving-graph-based routing protocol, and then to use it as a benchmark when comparing four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. To make the model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss these issues in this paper, drawing on our experience.
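A minimal sketch of the evolving-graph idea, under the simplifying assumptions that links are up at discrete time steps and each hop takes one step (the schedule below is invented): a foremost-journey search finds the earliest possible arrival time, waiting at intermediate nodes whenever the next link is down.

```python
from bisect import bisect_left
from heapq import heappush, heappop

def earliest_arrival(schedule, source, target, t_start=0):
    """Foremost-journey search on an evolving graph.
    `schedule` maps a directed link (u, v) to a sorted list of time steps
    at which the link is up; traversing a link takes one time step.
    Returns the earliest arrival time at `target`, or None if unreachable."""
    best = {source: t_start}
    heap = [(t_start, source)]
    while heap:
        t, u = heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for (a, b), times in schedule.items():
            if a != u:
                continue
            i = bisect_left(times, t)       # next time the link is up
            if i < len(times):
                arrival = times[i] + 1      # wait until then, then cross
                if arrival < best.get(b, float("inf")):
                    best[b] = arrival
                    heappush(heap, (arrival, b))
    return None

# Link A->B is up at t = 0, 2, 4; link B->C only at t = 5:
# a journey from A must wait at B before it can continue.
sched = {("A", "B"): [0, 2, 4], ("B", "C"): [5]}
```

Deterministic schedules like this are what make least-cost journeys computable in advance, which is the property the benchmark protocol exploits.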


The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors so as to minimize the program's completion time has been extensively studied as an academic optimization problem, arising when one optimizes the execution time of a parallel algorithm on a parallel computer. In this paper, we propose an application of Ant Colony Optimization (ACO) to the multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines; the goal is to compose a schedule that minimizes the total completion time. Since exact solution methods are infeasible for most instances of such problems, we rely on heuristics to find solutions. In this heuristic search approach, based on the ACO algorithm, a collection of agents cooperates to explore the search space effectively. A computational experiment is conducted on a suite of benchmark applications. Comparing our results with those of a previous heuristic algorithm shows that the ACO algorithm exhibits competitive performance with a small error ratio.
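For context, a simple greedy list-scheduling baseline for the same DAG-on-homogeneous-processors problem is sketched below (this is not the authors' ACO algorithm, and the task graph and durations are invented): tasks become ready when all predecessors finish, and each ready task is placed on the earliest-free processor.

```python
def list_schedule(tasks, deps, n_procs):
    """Greedy list scheduling of a weighted DAG on homogeneous processors.
    tasks: {name: duration}; deps: {name: set of predecessor names}.
    Returns the makespan (total completion time) of the greedy schedule."""
    finish = {}                      # task -> finish time
    proc_free = [0.0] * n_procs      # when each processor becomes free
    scheduled = set()
    while len(scheduled) < len(tasks):
        # Tasks whose predecessors have all completed
        ready = [t for t in tasks if t not in scheduled
                 and all(p in scheduled for p in deps.get(t, ()))]
        # Priority rule: longest task first (one common list-scheduling choice)
        task = max(ready, key=lambda t: tasks[t])
        est = max([finish[p] for p in deps.get(task, ())], default=0.0)
        p = min(range(n_procs), key=lambda i: proc_free[i])
        start = max(est, proc_free[p])
        finish[task] = start + tasks[task]
        proc_free[p] = finish[task]
        scheduled.add(task)
    return max(finish.values())

# Hypothetical DAG: c depends on a; d depends on a and b.
tasks = {"a": 2, "b": 3, "c": 2, "d": 4}
deps = {"c": {"a"}, "d": {"a", "b"}}
```

Metaheuristics such as ACO search over many such priority orderings instead of committing to one fixed rule, which is how they can beat greedy baselines like this one.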


“Biosim” is simulation software for modeling harvesting systems. The system can build a model of a logistic problem by combining several objects, so that the artificial system can show the performance of an individual model; it also describes the model's efficiency and its suitability for real-life application. Thus, when someone wishes to set up a logistic model such as a harvesting system, he or she can learn in advance the suitable placement of plants and factories, as well as the minimum number of objects, the total time to complete the task, the total investment required, and the total amount of noise produced by the establishment; the system produces an advance overview of the model. However, “Biosim” is quite slow: being an object-based system, it takes a long time to reach its decisions. The main task here is to modify the system so that it runs faster than before. The objective of this thesis is therefore to reduce the load of “Biosim” by modifying the original system and to increase its efficiency, so that the whole system is faster and performs more efficiently when applied in real life. The concept is to separate the execution part of “Biosim” from its graphical engine and to run this separated portion on a third-generation language platform; C++ was chosen as this external platform. After completing the proposed system, results with different models were observed. The results show that, for any type of plant or field and for any number of trucks, the proposed system is faster than the original: it takes at least 15% less time than the original “Biosim”, and its advantage grows with the complexity of the model. The more complex the model, the more efficient the proposed system is compared with the original “Biosim”; depending on the complexity of the model, the proposed system can be up to 56.53% faster.


Participation as an observer at the meeting of Task 14 of the IEA's Solar Heating and Cooling Programme, held in Hameln, Germany, has led to a greater understanding of interesting developments underway in several countries. This will be of use during the development of small-scale systems suitable for Swedish conditions. A summary of the work carried out by the working groups within Task 14 is given, with emphasis on the Domestic Hot Water group. Experiences with low-flow systems from several countries are related, and the conclusion is drawn that the maximum theoretical performance increase of 20% has not been achieved, due to poor heat exchangers and poor stratification in the storage tanks. Positive developments in connecting tubes and pumps are noted. Further participation as an observer in Task 14 meetings is desired, and is looked on favourably by the members of the group. Another conclusion is that SERC should continue its work on Swedish storage tanks, with emphasis on better stratification and heat exchangers, and possibly on modelling of system components. Finally, a German do-it-yourself kit is described and judged in comparison with prefabricated models and Swedish do-it-yourself kits.


Service discovery is a critical task in service-oriented architectures such as the Grid and Web Services. In this paper, we study a semantics-enabled service registry, GRIMOIRES, from a performance perspective. GRIMOIRES is designed to be the registry for myGrid and the OMII software distribution. We study the scalability of GRIMOIRES with respect to the amount of information published into it. The methodology we use and the data we present help researchers understand the performance characteristics of the registry and, more generally, of semantics-enabled service discovery. Based on this experimentation, we claim that GRIMOIRES is an efficient semantics-aware service discovery engine.



CMS is a general-purpose experiment designed to study the physics of pp collisions at 14 TeV at the Large Hadron Collider (LHC). It currently involves more than 2000 physicists from more than 150 institutes in 37 countries. The LHC will provide extraordinary opportunities for particle physics, based on its unprecedented collision energy and luminosity, when it begins operation in 2007. The principal aim of this report is to present the strategy of CMS for exploring the rich physics programme offered by the LHC. This volume demonstrates the physics capability of the CMS experiment. The prime goals of CMS are to explore physics at the TeV scale and to study the mechanism of electroweak symmetry breaking, through the discovery of the Higgs particle or otherwise. To carry out this task, CMS must be prepared to search for new particles, such as the Higgs boson or supersymmetric partners of the Standard Model particles, from the start-up of the LHC, since new physics at the TeV scale may manifest itself with modest data samples of the order of a few fb^-1 or less. The analysis tools that have been developed are applied, in great detail and with the full methodology of an analysis of CMS data, to specific benchmark processes against which the performance of CMS is gauged. These processes cover several Higgs boson decay channels, the production and decay of new particles such as Z' bosons and supersymmetric particles, B_s production, and processes in heavy-ion collisions. The simulation of these benchmark processes includes subtle effects such as possible detector miscalibration and misalignment. Beyond these benchmark processes, the physics reach of CMS is studied for a large number of signatures arising in the Standard Model, and in theories beyond it, for integrated luminosities ranging from 1 fb^-1 to 30 fb^-1. The Standard Model processes include QCD, B physics, diffraction, detailed studies of top quark properties, and electroweak physics topics such as the properties of the W and Z^0 bosons. The production and decay of the Higgs particle is studied for many observable decays, and the precision with which the Higgs boson's properties can be derived is determined. About ten different supersymmetry benchmark points are analysed using full simulation. The CMS discovery reach is evaluated across the SUSY parameter space, covering a large variety of decay signatures. Furthermore, the discovery reach for a plethora of alternative models of new physics is explored, notably extra dimensions, new high-mass vector boson states, little Higgs models, technicolour, and others. Methods to discriminate between models have been investigated. This report is organized as follows. Chapter 1, the Introduction, describes the context of this document. Chapters 2-6 describe examples of full analyses with photons, electrons, muons, jets, missing E_T, B mesons and taus, and of quarkonia in heavy-ion collisions. Chapters 7-15 describe the physics reach for Standard Model processes, Higgs discovery, and searches for new physics beyond the Standard Model.