171 results for Process-dissociation Framework
at University of Queensland eSpace - Australia
Abstract:
Current theoretical thinking about dual processes in recognition relies heavily on the measurement operations embodied within the process dissociation procedure. We critically evaluate the ability of this procedure to support this theoretical enterprise. We show that there are alternative processes that would produce a rough invariance in familiarity (a key prediction of the dual-processing approach) and that the process dissociation procedure does not have the power to differentiate between these alternative possibilities. We also show that attempts to relate parameters estimated by the process dissociation procedure to subjective reports (remember-know judgments) cannot differentiate between alternative dual-processing models and that there are problems with some of the historical evidence and with obtaining converging evidence. Our conclusion is that more specific theories incorporating ideas about representation and process are required.
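The procedure critiqued above is usually formalised with two measurement equations (Jacoby, 1991): under an independence assumption, an inclusion test gives P(old) = R + F − R·F and an exclusion test gives P(old) = F·(1 − R), from which the recollection estimate R and familiarity estimate F are solved. A minimal sketch of that arithmetic (the numeric inputs are illustrative, not data from the paper):

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Estimate recollection (R) and familiarity (F) from inclusion and
    exclusion response probabilities, under the procedure's standard
    independence assumptions:

        Inclusion:  P(old) = R + F - R*F
        Exclusion:  P(old) = F * (1 - R)
    """
    r = p_inclusion - p_exclusion   # recollection estimate
    f = p_exclusion / (1.0 - r)     # familiarity estimate
    return r, f

r, f = process_dissociation(0.70, 0.30)
# r = 0.70 - 0.30 = 0.40;  f = 0.30 / 0.60 = 0.50
```

The abstract's argument is precisely that these estimates are only as interpretable as the independence assumption built into the two equations.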
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.
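The Bayesian decision at the heart of a context noise account can be illustrated schematically (this is a generic likelihood-ratio sketch with assumed match probabilities, not the authors' bind cue decide model): the test word's retrieved context features are evaluated under "studied in this context" versus "not studied" hypotheses, and the item is called old when the log odds exceed a criterion.

```python
import math

def old_new_decision(feature_matches, n_features,
                     p_match_old, p_match_new, criterion=0.0):
    """Schematic Bayesian recognition decision: accumulate the
    log-likelihood ratio that the test item's context features match
    the study context under 'old' vs. 'new' hypotheses.
    p_match_old and p_match_new are illustrative parameters."""
    misses = n_features - feature_matches
    llr = (feature_matches * math.log(p_match_old / p_match_new)
           + misses * math.log((1 - p_match_old) / (1 - p_match_new)))
    return ("old", llr) if llr > criterion else ("new", llr)

old_new_decision(8, 10, p_match_old=0.8, p_match_new=0.5)  # -> ("old", ...)
```

Because the evidence comes from the test word's own contextual history rather than from the other studied words, interference in such a model scales with context overlap, not list composition.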
Abstract:
Following study, participants received 2 tests. The 1st was a recognition test; the 2nd was designed to tap recollection. The objective was to examine performance on Test 1 conditional on Test 2 performance. In Experiment 1, contrary to process dissociation assumptions, exclusion errors better predicted subsequent recollection than did inclusion errors. In Experiments 2 and 3, with alternate questions posed on Test 2, words having high estimates of recollection with one question had high estimates of familiarity with the other question. Results supported the following: (a) the 2-test procedure has considerable potential for elucidating the relationship between recollection and familiarity; (b) there is substantial evidence for dependency between such processes when estimates are obtained using the process dissociation and remember-know procedures; and (c) order of information access appears to depend on the question posed to the memory system.
Abstract:
Two studies investigated the context deletion effect, the attenuation of priming in implicit memory tests of words when words have been studied in text rather than in isolation. In Experiment 1, stem completion for single words was primed to a greater extent by words studied alone than in sentence contexts, and a higher proportion of completions from studied words was produced under direct instructions (cued recall) than under indirect instructions (produce the first completion that comes to mind). The effect of a sentence context was eliminated when participants were instructed to attend to the target word during the imagery generation task used in the study phase. In Experiment 2, the effect of a sentence context at study was reduced when the target word was presented in distinctive format within the sentence, and the study task (grammatical judgment) was directed at a word other than the target. The results implicate conceptual and perceptual processes that distinguish a word from its context in priming in word stem completion.
Abstract:
Frequency, recency, and type of prior exposure to very low- and high-frequency words were manipulated in a 3-phase (i.e., familiarization training, study, and test) design. Increasing the frequency with which a definition for a very low-frequency word was provided during familiarization facilitated the word's recognition in both yes-no (Experiment 1) and forced-choice paradigms (Experiment 2). Recognition of very low-frequency words not accompanied by a definition during familiarization first increased, then decreased as familiarization frequency increased (Experiment 1). Reasons for these differences were investigated in Experiment 3 using judgments of recency and frequency. Results suggested that prior familiarization of a very low-frequency word with its definition may allow a more adequate episodic representation of the word to be formed during a subsequent study trial. Theoretical implications of these results for current models of memory are discussed.
Abstract:
Event-related potentials (ERPs) were recorded while subjects made old/new recognition judgments on new unstudied words and old words which had been presented at study either once ('weak') or three times ('strong'). The probability of an 'old' response was significantly higher for strong than weak words and significantly higher for weak than new words. Comparisons were made initially between ERPs to new, weak and strong words, and subsequently between ERPs associated with six strength-by-response conditions. The N400 component was found to be modulated by memory trace strength in a graded manner. Its amplitude was most negative in new word ERPs and most positive in strong word ERPs. This 'N400 strength effect' was largest at the left parietal electrode (in ear-referenced ERPs). The amplitude of the late positive complex (LPC) effect was sensitive to decision accuracy (and perhaps confidence). Its amplitude was larger in ERPs evoked by words attracting correct versus incorrect recognition decisions. The LPC effect had a left > right, centro-parietal scalp topography (in ear-referenced ERPs). Hence, whereas the majority of previous ERP studies of episodic recognition have interpreted results from the perspective of dual-process models, we provide alternative interpretations of N400 and LPC old/new effects in terms of memory strength and decisional factor(s). (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Three experiments are reported that examined the process by which trainees learn decision-making skills during a critical incident training program. Formal theories of category learning were used to identify two processes that may be responsible for the acquisition of decision-making skills: rule learning and exemplar learning. Experiments 1 and 2 used the process dissociation procedure (L. L. Jacoby, 1998) to evaluate the contribution of these processes to performance. The results suggest that trainees used a mixture of rule and exemplar learning. Furthermore, these learning processes were influenced by different aspects of training structure and design. The goal of Experiment 3 was to develop training techniques that enable trainees to use a rule adaptively. Trainees were tested on cases that represented exceptions to the rule. Unexpectedly, the results suggest that providing general instruction regarding the kinds of conditions in which a decision rule does not apply caused trainees to fixate on the specific conditions mentioned and impaired their ability to identify other conditions in which the rule might not apply. The theoretical, methodological, and practical implications of the results are discussed.
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
Abstract:
As process management projects have increased in size due to globalised and company-wide initiatives, a corresponding growth in the size of process modeling projects can be observed. Despite advances in languages, tools and methodologies, several aspects of these projects have been largely ignored by the academic community. This paper makes a first contribution to a potential research agenda in this field by defining the characteristics of large-scale process modeling projects and proposing a framework of related issues. These issues are derived from a semi-structured interview and six focus groups conducted in Australia, Germany and the USA with enterprise and modeling software vendors and customers. The focus groups confirm the existence of unresolved problems in business process modeling projects. The outcomes provide a research agenda which directs researchers into further studies in global process management, process model decomposition and the overall governance of process modeling projects. It is expected that this research agenda will provide guidance to researchers and practitioners by focusing on areas of high theoretical and practical relevance.
Abstract:
Community awareness of the sustainable use of land, water and vegetation resources is increasing. The sustainable use of these resources is pivotal to sustainable farming systems. However, techniques for monitoring the sustainable management of these resources are poorly understood and untested. We propose a framework to benchmark and monitor resources in the grains industry. Eight steps are listed below to achieve these objectives: (i) define industry issues; (ii) identify the issues through growers, stakeholder and community consultation; (iii) identify indicators (measurable attributes, properties or characteristics) of sustainability through consultation with growers, stakeholders, experts and community members, relating to: crop productivity; resource maintenance/enhancement; biodiversity; economic viability; community viability; and institutional structure; (iv) develop and use selection criteria to select indicators that consider: responsiveness to change; ease of capture; community acceptance and involvement; interpretation; measurement error; stability, frequency and cost of measurement; spatial scale issues; and mapping capability in space and through time. The appropriateness of indicators can be evaluated using a decision making system such as a multiobjective decision support system (MO-DSS, a method to assist in decision making from multiple and conflicting objectives); (v) involve stakeholders and the community in the definition of goals and setting benchmarking and monitoring targets for sustainable farming; (vi) take preventive and corrective/remedial action; (vii) evaluate effectiveness of actions taken; and (viii) revise indicators as part of a continual improvement principle designed to achieve best management practice for sustainable farming systems. 
The major recommendations are to: (i) implement the framework for resources (land, water and vegetation, economic, community and institution) benchmarking and monitoring, and integrate this process with current activities so that awareness, implementation and evolution of sustainable resource management practices become normal practice in the grains industry; (ii) empower the grains industry to take the lead by using relevant sustainability indicators to benchmark and monitor resources; (iii) adopt a collaborative approach by involving various industry, community, catchment management and government agency groups to minimise implementation time. Monitoring programs such as Waterwatch, Soilcheck, Grasscheck and Topcrop should be utilised; (iv) encourage the adoption of a decision making system by growers and industry representatives as a participatory decision and evaluation process. Widespread use of sustainability indicators would assist in validating and refining these indicators and evaluating sustainable farming systems. The indicators could also assist in evaluating best management practices for the grains industry.
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications-in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
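The core idea of deriving structured test sets from a specification's case analysis can be sketched informally outside the Z notation (the banking operation, its precondition, and the partition names here are hypothetical illustrations, not material from the paper): each template names one region of the operation's input space, and concrete test data are classified into, or generated from, those templates.

```python
# Hypothetical operation spec: withdraw(balance, amount) with
# precondition amount > 0; behaviour differs for amount <= balance
# (success) vs. amount > balance (rejection).

def templates(cases):
    """Partition concrete test inputs into named test templates derived
    from the specification's case analysis of the withdraw operation."""
    partition = {"valid_withdrawal": [],
                 "insufficient_funds": [],
                 "invalid_amount": []}
    for balance, amount in cases:
        if amount <= 0:
            partition["invalid_amount"].append((balance, amount))
        elif amount <= balance:
            partition["valid_withdrawal"].append((balance, amount))
        else:
            partition["insufficient_funds"].append((balance, amount))
    return partition

templates([(100, 50), (100, 150), (100, 0)])
```

Structuring test data this way mirrors the framework's benefit noted in the abstract: the relation between each test set and the specified operation is explicit, which supports regression testing and oracle construction.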
Abstract:
This paper develops a general framework for valuing a wide range of derivative securities. Rather than focusing on the stochastic process of the underlying security and developing an instantaneously-riskless hedge portfolio, we focus on the terminal distribution of the underlying security. This enables the derivative security to be valued as the weighted sum of a number of component pieces. The component pieces are simply the different payoffs that the security generates in different states of the world, and they are weighted by the probability of the particular state of the world occurring. A full set of derivations is provided. To illustrate its use, the valuation framework is applied to plain-vanilla call and put options, as well as a range of derivatives including caps, floors, collars, supershares, and digital options.
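The terminal-distribution argument reduces to simple arithmetic: a derivative's value is the sum over terminal states of payoff times state probability. A minimal sketch with an assumed three-state terminal distribution and toy numbers (the states and probabilities are illustrative, not from the paper):

```python
def value_by_terminal_distribution(states, payoff, discount=1.0):
    """Value a derivative as the probability-weighted sum of its payoffs
    across terminal states of the underlying security.
    `states` is a list of (terminal_price, probability) pairs."""
    return discount * sum(p * payoff(s) for s, p in states)

# Toy terminal distribution for the underlying (illustrative numbers).
states = [(80, 0.25), (100, 0.50), (120, 0.25)]
call = lambda s: max(s - 100, 0)   # plain-vanilla call, strike 100
put  = lambda s: max(100 - s, 0)   # plain-vanilla put, strike 100

value_by_terminal_distribution(states, call)  # 0.25 * 20 = 5.0
```

Caps, floors, collars, supershares, and digital options differ only in the `payoff` function supplied, which is the flexibility the framework is advertising.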
Abstract:
Information processing accounts propose that autonomic orienting reflects the amount of resources allocated to process a stimulus. However, secondary task reaction time (RT), a supposed measure of processing resources, has shown a dissociation from autonomic orienting. The present study tested the hypothesis that secondary task RT reflects a serial processing mechanism. Participants (N = 24) were presented with circle and ellipse shapes and asked to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of a second shape (task-irrelevant). Concurrent with the counting task, participants performed a secondary RT task to an auditory probe presented at either a high or low intensity and at two different probe positions following shape onset (50 and 300 ms). Electrodermal orienting was larger during task-relevant shapes than during task-irrelevant shapes, but secondary task RT to the high-intensity probe was slower during the latter. In addition, an underadditive interaction between probe stimulus intensity and probe position was found in secondary RT. The findings are consistent with a serial processing model of secondary RT and suggest that the notion of processing stages should be incorporated into current information-processing models of autonomic orienting.