968 results for Implicit calibration
Abstract:
There has been significant interest in indirect measures of attitudes like the Implicit Association Test (IAT), presumably because of the possibility of uncovering implicit prejudices. The authors derived a set of qualitative predictions for people's performance in the IAT on the basis of random walk models. These were supported in 3 experiments comparing clearly positive or negative categories to nonwords. They also provided evidence that participants shift their response criterion when doing the IAT. Because of these criterion shifts, a response pattern in the IAT can have multiple causes. Thus, it is not possible to infer a single cause (such as prejudice) from IAT results. A surprising additional result was that nonwords were treated as though they were evaluated more negatively than obviously negative items like insects, suggesting that low-familiarity items may generate the pattern of data previously interpreted as evidence for implicit prejudice.
Abstract:
This section presents abstracts of three studies on how consumer choices can be influenced, for example by the name letter effect in brand names, without decision makers being aware of this influence. The first paper examined whether making brand names similar to consumers' names increases the likelihood that consumers will choose the brand. One prediction is that people will prefer and be more likely to choose products or services whose names prominently feature the letters in their own first or last names. The results showed that subjects' preference rankings and evaluations of name-letter-matching brands were higher than those of non-matching brands. The second paper tested the possibility of using subliminal priming to activate a concept that a persuasive communicator could take advantage of. To examine the idea, two experiments were presented. In the first experiment, participants' level of thirst was manipulated, and they were then subliminally presented with either thirst-related words or control words. While the manipulations had no effect on participants' self-reported, conscious ratings of thirst, there was a significant interactive effect of the two factors (thirst level and prime type) on how much of the drink provided in a taste test was consumed. In a second, follow-up experiment, thirsty participants were subliminally presented with either thirst-related words or control words, after which they viewed advertisements for two new sports beverages. In conclusion, the research demonstrates that under certain conditions, subliminal priming techniques can enhance persuasion. The third paper hypothesized that the lack of correlation between implicit and explicit evaluations is due to measurement error.
Abstract:
The purpose of this study was to investigate the role of the fronto–striatal system for implicit task sequence learning. We tested performance of patients with compromised functioning of the fronto–striatal loops, that is, patients with Parkinson's disease and patients with lesions in the ventromedial or dorsolateral prefrontal cortex. We also tested amnesic patients with lesions either to the basal forebrain/orbitofrontal cortex or to thalamic/medio-temporal regions. We used a task sequence learning paradigm involving the presentation of a sequence of categorical binary-choice decision tasks. After several blocks of training, the sequence, hidden in the order of tasks, was replaced by a pseudo-random sequence. Learning (i.e., sensitivity to the ordering) was assessed by measuring whether this change disrupted performance. Although all the patients were able to perform the decision tasks quite easily, those with lesions to the fronto–striatal loops (i.e., patients with Parkinson's disease, with lesions in the ventromedial or dorsolateral prefrontal cortex and those amnesic patients with lesions to the basal forebrain/orbitofrontal cortex) did not show any evidence of implicit task sequence learning. In contrast, those amnesic patients with lesions to thalamic/medio-temporal regions showed intact sequence learning. Together, these results indicate that the integrity of the fronto–striatal system is a prerequisite for implicit task sequence learning.
Abstract:
Implicit task sequence learning (TSL) can be considered an extension of implicit sequence learning, which is typically tested with the classical serial reaction time task (SRTT). By design, in the SRTT there is a correlation between the sequence of stimuli to which participants must attend and the sequence of motor movements/key presses with which participants must respond. The TSL paradigm makes it possible to disentangle this correlation and to separately manipulate the presence/absence of a sequence of tasks, a sequence of responses, and even other streams of information such as stimulus locations or stimulus-response mappings. Here I review the state of TSL research, which points to the critical role of the presence of correlated streams of information in implicit sequence learning. On a more general level, I propose that beyond correlated streams of information, a simple statistical learning mechanism may also be involved in implicit sequence learning, and that the relative contributions of these two explanations differ according to task requirements. With this differentiation, conflicting results can be integrated into a coherent framework.
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of only a few centimetres. It is concluded that neither the uncertainty with regard to the length of individual fractures nor the detailed geometry of the network along the flowpath between injection and extraction boreholes is critical, because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient at retarding tracers over short periods of time (hours to days), their volume is very small and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption Kds are more than one order of magnitude smaller than in fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced.
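For orientation, a minimal sketch of a generic dual porosity (fracture plus matrix diffusion) transport formulation of the kind referred to in this abstract is given below; the symbols (half-aperture b, matrix porosity ε_p, pore diffusivity D_p, dispersion coefficient D_L, retardation factors R_f and R_m, bulk density ρ_b, sorption coefficient K_d) are illustrative and do not reproduce the study's exact parameterization.

```latex
% Advection-dispersion along the fracture (coordinate x), with an exchange
% term for diffusion into the adjacent matrix (coordinate z, z = 0 at the
% fracture wall):
\[
  R_f \frac{\partial c_f}{\partial t}
  + v \frac{\partial c_f}{\partial x}
  - D_L \frac{\partial^2 c_f}{\partial x^2}
  = \frac{\varepsilon_p D_p}{b}
    \left. \frac{\partial c_m}{\partial z} \right|_{z=0}
\]
% One-dimensional diffusion and linear sorption in the matrix, coupled to the
% fracture through continuity of concentration at the wall:
\[
  R_m \frac{\partial c_m}{\partial t} = D_p \frac{\partial^2 c_m}{\partial z^2},
  \qquad c_m(x, 0, t) = c_f(x, t),
  \qquad R_m = 1 + \frac{\rho_b K_d}{\varepsilon_p}
\]
```

The matrix diffusion exchange term is what produces the characteristic late-time tailing of the breakthrough curve mentioned above, and the porosity- and K_d-dependent matrix retardation is where the order-of-magnitude contrast between fault gouge and fresh granite enters.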
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems makes it possible to determine a Transconvolution function that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when certain boundary conditions are adhered to, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by imposing a common, reproducible, and defined partial volume effect.
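The core operation described under METHODS can be illustrated numerically. The following Python/NumPy sketch assumes that both point spread functions are sampled on a common grid and that the virtual system's Hann-windowed transfer function vanishes above the critical frequency, so the frequency-domain division stays well conditioned; the function names and the eps threshold are illustrative and are not taken from the paper.

```python
import numpy as np

def transconvolution_kernel(psf_system, psf_virtual, eps=1e-6):
    """Frequency-domain Transconvolution kernel: divide the virtual system's
    transfer function by the real system's, i.e. convolve with one point
    spread function and the inverse of the other. Frequencies that the real
    system barely transfers are zeroed; the virtual system's Hann window
    suppresses them anyway."""
    H_sys = np.fft.fftn(np.fft.ifftshift(psf_system))
    H_virt = np.fft.fftn(np.fft.ifftshift(psf_virtual))
    kernel = np.zeros_like(H_sys)
    mask = np.abs(H_sys) > eps
    kernel[mask] = H_virt[mask] / H_sys[mask]
    return kernel

def apply_transconvolution(image, kernel):
    """Map an image acquired on the real system onto the virtual system:
    the result approximates how the virtual tomograph would have imaged
    the same object, i.e. with its defined partial volume effect."""
    return np.real(np.fft.ifftn(np.fft.fftn(image) * kernel))
```

Applied to phantom acquisitions from two different tomographs, kernels of this kind would bring both data sets to the common, defined partial volume effect of the virtual system, which is the comparability property reported above.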
Abstract:
The match time spent on court in racquet sports can be seen as depending on the effort an athlete is willing to exert in a competition. Achievement motivation is defined as the effort a person spends on a difficult task whose completion allows her to meet a personal standard of excellence, improve herself, or outperform others (McClelland, Atkinson, Clark, & Lowell, 1953). Fifty-two professionals from three racquet sports (tennis, table tennis, and badminton) filled in a questionnaire on their explicit achievement motive, a scale on general life stress, and a measure of the implicit achievement motive. Results indicate that the implicit, but not the explicit, achievement motive predicted the athletes' time spent on court (effort). Additionally, the general life stress scale was negatively related to time spent on court. Findings are in line with theoretical assumptions that actual behavior is linked to the implicit achievement motive and that higher levels of general life stress lead to impaired performance in sports.
Abstract:
Motivational research over the past decade has provided ample evidence for the existence of two distinct motivational systems. Implicit motives are affect-based needs and have been found to predict spontaneous behavioral trends over time. Explicit motives, in contrast, represent cognitively based self-attributes and are primarily linked to choices. The present research examines the differentiating and predictive value of the implicit vs. explicit achievement motive for team sports performance. German students (N = 42) completed a measure of the implicit achievement motive (Operant Motive Test) and of the explicit achievement motive (Achievement Motive Scale-Sport). Choosing a goal distance is significantly predicted by the explicit achievement motive measure. By contrast, repeated performances in a team tournament are significantly predicted by the indirect measure. Results are in line with findings showing that implicit and explicit motive measures are associated with different classes of behavior.