957 results for Lexical decision task
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S    (1)

The idea underlying intelligent lexical acquisition systems is to modify this schematic formula so that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L'    (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To illustrate four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of its findings and motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.
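Formulas (1) and (2) describe a cycle: analyze input with the current grammar and lexicon, then exploit the resulting structures to improve the lexicon. The toy Python sketch below illustrates that cycle for a single feature (verbal valency); the parser, the evidence threshold, and the corpus are invented stand-ins, not the thesis's HPSG-based ANALYZE-LEARN-REDUCE components.

```python
from collections import Counter, defaultdict

# Toy stand-ins for G, L, and C: the "parser" just treats the second word of
# an utterance as the verb and counts its complements. Purely illustrative;
# the thesis's HPSG-based components are far richer.

def parse(utterance):
    """Formula (1), G + L + C -> S: map an utterance to a toy structure."""
    words = utterance.split()
    return words[1], len(words) - 2     # (verb, number of complements)

def learn(corpus, lexicon, min_evidence=2):
    """Formula (2), G + L + S -> L': build an improved lexicon from the
    structures, committing an entry only once 'enough' input has been seen."""
    observations = defaultdict(Counter)
    for utterance in corpus:
        verb, arity = parse(utterance)
        observations[verb][arity] += 1
    new_lexicon = dict(lexicon)         # entries may be revised, not only added
    for verb, counts in observations.items():
        arity, n = counts.most_common(1)[0]
        if n >= min_evidence:
            new_lexicon[verb] = {"valency": arity}
    return new_lexicon

corpus = ["she sleeps", "he sleeps", "she reads books", "he reads novels"]
print(learn(corpus, lexicon={}))
# -> {'sleeps': {'valency': 0}, 'reads': {'valency': 1}}
```

The min_evidence check stands in for the system's decision on whether it has seen 'enough' input, and re-running learn on new structures can overwrite an entry, mirroring the revision of falsely acquired knowledge.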
Abstract:
Prediction of clinical outcome in cancer is usually achieved by histopathological evaluation of tissue samples obtained during surgical resection of the primary tumor. Traditional tumor staging (AJCC/UICC TNM classification) summarizes data on tumor burden (T), the presence of cancer cells in draining and regional lymph nodes (N), and evidence for metastases (M). However, it is now recognized that clinical outcome can vary significantly among patients within the same stage. The current classification provides limited prognostic information and does not predict response to therapy. Recent literature has pointed to the importance of the host immune system in controlling tumor progression. The evidence thus supports including immunological biomarkers as a tool for the prediction of prognosis and response to therapy. Accumulating data collected from large cohorts of human cancers have demonstrated the impact of immune classification, which has a prognostic value that may add to the significance of the AJCC/UICC TNM classification. It is therefore imperative to begin to incorporate the 'Immunoscore' into traditional classification, thus providing an essential prognostic and potentially predictive tool. Introduction of this parameter as a biomarker to classify cancers, as part of routine diagnostic and prognostic assessment of tumors, will facilitate clinical decision-making, including rational stratification of patient treatment. Equally, the inherent complexity of quantitative immunohistochemistry, in conjunction with protocol variation across laboratories, analysis of different immune cell types, inconsistent region-selection criteria, and variable ways of quantifying immune infiltration, underlines the urgent need for assay harmonization. In an effort to promote the Immunoscore in routine clinical settings, an international task force was initiated. This review represents a follow-up to the announcement of this initiative and to the J Transl Med editorial from January 2012. Immunophenotyping of tumors may provide crucial novel prognostic information. The results of this international validation may result in the implementation of the Immunoscore as a new component of the classification of cancer, designated TNM-I (TNM-Immune).
Abstract:
This series of studies is the first to use conjoint analysis to examine how individuals make trade-offs during mate selection when provided with information about a partner's history of sexual infidelity. Across three studies, participants ranked profiles of potential mates, with each profile varying across five attributes: financial stability, physical attractiveness, sexual fidelity, emotional investment, and similarity. They also rated each attribute separately for its importance in an ideal mate. Overall, we found that for a long-term mate, participants prioritized a potential partner's history of sexual fidelity over the other attributes when profiles were ranked conjointly. For a short-term mate, sexual fidelity, physical attractiveness, and financial stability were equally important, and each was more important than emotional investment and similarity. These patterns contrast with participants' self-reported importance ratings of the individual attributes. Our results are interpreted within the context of previous literature examining how making trade-offs affects mate selection.
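As a rough illustration of the conjoint logic described above: because each profile bundles several attribute levels, per-attribute part-worth utilities can be recovered by regressing the profile ranks on the attribute levels. The sketch below uses an invented respondent and a full factorial of high/low profiles; it is not the studies' actual design or data.

```python
import itertools
import numpy as np

# Hypothetical conjoint-analysis illustration: recover per-attribute
# part-worth utilities from one respondent's ranking of mate profiles.
attributes = ["financial stability", "physical attractiveness",
              "sexual fidelity", "emotional investment", "similarity"]

# Full factorial design: every profile is a high/low (1/0) combination.
profiles = np.array(list(itertools.product([0, 1], repeat=5)), dtype=float)

# Simulated respondent whose utility weights make fidelity dominate,
# as in the long-term condition. These weights are invented.
true_utils = np.array([0.8, 0.9, 2.0, 0.5, 0.4])
utility = profiles @ true_utils
ranks = utility.argsort().argsort()        # 0 = worst, 31 = best profile

# OLS of rank on attribute levels yields part-worth estimates
# (the classic metric-conjoint approach for rank data).
X = np.column_stack([np.ones(len(profiles)), profiles])
betas = np.linalg.lstsq(X, ranks.astype(float), rcond=None)[0][1:]
importance = betas / betas.sum()           # relative attribute importance
for name, w in zip(attributes, importance):
    print(f"{name}: {w:.2f}")
```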
Abstract:
Previous work has reported that in the Iowa gambling task (IGT) advantageous decisions may be taken before the advantageous strategy is known [Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275, 1293-1295]. In order to test whether explicit memory is essential for the acquisition of a behavioural preference for advantageous choices, we measured behavioural performance and skin conductance responses (SCRs) in five patients with dense amnesia following damage to the basal forebrain and orbitofrontal cortex, six amnesic patients with damage to the medial temporal lobe or the diencephalon, and eight control subjects performing the IGT. Across 100 trials, healthy participants acquired a preference for advantageous choices and generated large SCRs to high levels of punishment. In addition, their anticipatory SCRs to disadvantageous choices were larger than those to advantageous choices. However, this dissociation occurred much later than the behavioural preference for advantageous alternatives. In contrast, though exhibiting discriminatory autonomic SCRs to different levels of punishment, 9 of the 11 amnesic patients performed at chance and did not show differential anticipatory SCRs to advantageous and disadvantageous choices. Further, the magnitude of anticipatory SCRs did not correlate with behavioural performance. These results suggest that the acquisition of a behavioural preference, be it for advantageous or disadvantageous choices, depends on memory of the previous reinforcements encountered in the task, a capacity requiring intact explicit memory.
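For context, the IGT opposes decks with large immediate rewards but larger long-run losses to decks with small rewards and smaller losses, and performance is usually summarized as the net score (advantageous minus disadvantageous picks). The sketch below simulates that structure with the commonly cited expected values; the policy and seed are illustrative only.

```python
import random

# Illustrative simulation of the Iowa gambling task payoff structure
# (Bechara et al., 1994): decks A/B pay $100 but lose $125 per card on
# average; decks C/D pay $50 and lose only $25 per card on average.
REWARD = {"A": 100, "B": 100, "C": 50, "D": 50}
MEAN_LOSS = {"A": 125, "B": 125, "C": 25, "D": 25}   # expected loss per card

def play(policy, trials=100, seed=1):
    rng = random.Random(seed)
    picks, total = [], 2000                 # standard $2000 loan
    for _ in range(trials):
        deck = policy(picks, rng)
        total += REWARD[deck] - MEAN_LOSS[deck]
        picks.append(deck)
    # Standard behavioural summary: advantageous minus disadvantageous picks.
    net_score = sum(d in "CD" for d in picks) - sum(d in "AB" for d in picks)
    return net_score, total

random_policy = lambda picks, rng: rng.choice("ABCD")
print(play(random_policy))   # chance-level performance, net score near 0
```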
Abstract:
Situationally adaptive behavior relies on the identification of relevant target stimuli, the evaluation of these with respect to the current context and the selection of an appropriate action. We used functional magnetic resonance imaging (fMRI) to disentangle the neural networks underlying these processes within a single task. Our results show that activation of mid-ventrolateral prefrontal cortex (PFC) reflects the perceived presence of a target stimulus regardless of context, whereas context-appropriate evaluation is subserved by mid-dorsolateral PFC. Enhancing demands on response selection by means of response conflict activated a network of regions, all of which are directly connected to motor areas. On the midline, rostral anterior paracingulate cortex was found to link target detection and response selection by monitoring for the presence of behaviorally significant conditions. In summary, we provide new evidence for process-specific functional dissociations in the frontal lobes. In target-centered processing, target detection in the VLPFC is separable from contextual evaluation in the DLPFC. Response-centered processing in motor-associated regions occurs partly in parallel to these processes, which may enhance behavioral efficiency, but it may also lead to reaction time increases when an irrelevant response tendency is elicited.
Abstract:
Studies with chronic schizophrenia patients have demonstrated that patients fluctuate between rigid and unpredictable responses in decision-making situations, a phenomenon which has been called dysregulation. The aim of this study was to investigate whether schizophrenia patients already display dysregulated behavior at the beginning of their illness. Thirty-two first-episode schizophrenia or schizophreniform patients and 30 healthy controls performed the two-choice prediction task. The decision-making behavior of first-episode patients was shown to be characterized by a high degree of dysregulation accompanied by low metric entropy and a tendency towards increased mutual information. These results indicate that behavioral abnormalities during the two-choice prediction task are already present during the early stages of the illness.
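The two measures named here are computable directly from the binary response sequence. Below is a minimal sketch, assuming plug-in (empirical) estimators and a block length of 3; rigid alternation, for example, yields low metric entropy together with high mutual information, the pattern reported for patients.

```python
from collections import Counter
from math import log2

def block_entropy(seq, L):
    """Shannon entropy of length-L blocks, in bits."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in counts.values())

def metric_entropy(seq, L=3):
    """Block entropy per symbol: low values indicate rigid, repetitive
    responding; values near 1 bit indicate unpredictable responding."""
    return block_entropy(seq, L) / L

def mutual_information(seq):
    """I(X_t; X_{t+1}) between successive binary choices, in bits,
    via the plug-in identity I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return 2 * block_entropy(seq, 1) - block_entropy(seq, 2)

rigid = [0, 1] * 50                     # strict alternation
print(metric_entropy(rigid), mutual_information(rigid))
# -> low metric entropy (~0.33 bits/symbol) with high MI (1 bit)
```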
Abstract:
During a construction project, managers encounter numerous contingencies and face the challenging task of making decisions that will effectively keep the project on track. This task is very challenging because construction projects are non-prototypical and their processes are irreversible. It is therefore critical to apply a methodological approach to develop a few alternative management decision strategies during the planning phase, which can be deployed to manage alternative scenarios resulting from expected and unexpected disruptions to the as-planned schedule. Such a methodology should have the following features, which are missing from the existing research: (1) examining the effects of local decisions on global project outcomes; (2) studying how a schedule responds to decisions and disruptive events, because the risk in a schedule is a function of the decisions made; (3) establishing a method to assess and improve management decision strategies; and (4) developing project-specific decision strategies, because each construction project is unique and the lessons from a particular project cannot easily be applied to projects with different contexts. The objective of this dissertation is to develop a schedule-based simulation framework to design, assess, and improve sequences of decisions for the execution stage. The contribution of this research is the introduction of decision strategies for managing a project and the establishment of an iterative methodology to continuously assess and improve decision strategies and schedules. Project managers or schedulers can implement the methodology at the planning stage to develop and identify schedules accompanied by suitable decision strategies for managing a project. The developed methodology also lays the foundation for an algorithm that continuously and automatically generates satisfactory schedules and strategies throughout the construction life of a project. Unlike studies of isolated daily decisions, the proposed framework introduces the notion of 'decision strategies' to manage the construction process. A decision strategy is a sequence of interdependent decisions determined by resource-allocation policies such as labor, material, equipment, and space policies. The schedule-based simulation framework consists of two parts: experiment design and result assessment. The core of the experiment design is the establishment of an iterative method to test and improve decision strategies and schedules, based on the introduction of decision strategies and the development of a schedule-based simulation testbed. The simulation testbed used is the Interactive Construction Decision Making Aid (ICDMA). ICDMA has a previously developed emulator that duplicates the construction process and a random event generator that allows the decision-maker to respond to disruptions in the emulation. It is used to study how the schedule responds to these disruptions and to the corresponding decisions made over the duration of the project, while accounting for cascading impacts and dependencies between activities. The dissertation is organized into two parts. The first part reviews the existing research, identifies the departure points of this work, and develops the schedule-based simulation framework to design, assess, and improve decision strategies. In the second part, the proposed framework is applied to investigate specific research problems.
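A hedged sketch of the core idea: treat a decision strategy as a per-activity policy, replay a schedule under random disruptions, and score the strategy by the resulting duration and cost. All activities, probabilities, and costs below are invented; this is not ICDMA or the dissertation's testbed.

```python
import random

# Toy schedule: activities with planned durations (days). Each day a
# disruption may add delay, and the active policy decides whether to
# "crash" (accelerate) the current activity at a cost premium.
ACTIVITIES = [("excavation", 5), ("foundation", 8), ("framing", 10)]

def simulate(strategy, p_disrupt=0.3, seed=7):
    rng = random.Random(seed)
    day, cost = 0, 0.0
    for i, (name, planned) in enumerate(ACTIVITIES):
        remaining = planned
        while remaining > 0:
            day += 1
            if rng.random() < p_disrupt:
                remaining += 1                  # disruption adds a day
            if strategy[i] == "crash" and remaining > 1:
                remaining -= 2                  # work two days' worth ...
                cost += 1.5                     # ... at a premium
            else:
                remaining -= 1
    return day, cost                            # (project duration, extra cost)

# Compare two decision strategies under identically seeded disruptions.
for strategy in [("normal", "normal", "normal"), ("normal", "crash", "crash")]:
    print(strategy, simulate(strategy))
```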
Abstract:
In many complex and dynamic domains, the ability to generate and then select the appropriate course of action is based on the decision maker's "reading" of the situation--in other words, their ability to assess the situation and predict how it will evolve over the next few seconds. Current theories regarding option generation during the situation assessment and response phases of decision making offer contrasting views on the cognitive mechanisms that support superior performance. The Recognition-Primed Decision-making model (RPD; Klein, 1989) and the Take-The-First heuristic (TTF; Johnson & Raab, 2003) suggest that superior decisions are made by generating few options and then selecting the first option as the final one. Long-Term Working Memory theory (LTWM; Ericsson & Kintsch, 1995), on the other hand, posits that skilled decision makers construct rich, detailed situation models, and that as a result, skilled performers should have the ability to generate more of the available task-relevant options. The main goal of this dissertation was to use these theories about option generation as a way to further the understanding of how police officers anticipate a perpetrator's actions and make decisions about how to respond during dynamic law enforcement situations. An additional goal was to gather information that can be used, in the future, to design training based on the anticipation skills, decision strategies, and processes of experienced officers. Two studies were conducted to achieve these goals. Study 1 identified video-based law enforcement scenarios that could be used to discriminate between experienced and less-experienced police officers, in terms of their ability to anticipate the outcome. The discriminating scenarios were used as the stimuli in Study 2; 23 experienced and 26 less-experienced police officers observed temporally occluded versions of the scenarios, and then completed assessment and response option-generation tasks. The results provided mixed support for these accounts of option generation in such situations. Consistent with RPD and TTF, participants typically selected the first-generated option as their final one, and did so during both the assessment and response phases of decision making. Consistent with LTWM theory, participants--regardless of experience level--generated more task-relevant assessment options than task-irrelevant options. However, an expected interaction between experience level and option relevance was not observed. Collectively, the two studies provide a deeper understanding of how police officers make decisions in dynamic situations. The methods developed and employed in the studies can be used to investigate anticipation and decision making in other critical domains (e.g., nursing, military). The results are discussed in relation to how they can inform future studies of option-generation performance, and how they could be applied to develop training for law enforcement officers.
Abstract:
BACKGROUND Recommendations from international task forces on geriatric assessment emphasize the need for research including validation of cancer-specific geriatric assessment (C-SGA) tools in oncological settings. The objective of this study was to evaluate the feasibility of the SAKK Cancer-Specific Geriatric Assessment (C-SGA) in clinical practice. METHODS A cross-sectional study of cancer patients ≥65 years old (N = 51) with pathologically confirmed cancer presenting for initiation of chemotherapy treatment (07/01/2009-03/31/2011) at two oncology departments in Swiss canton hospitals: Kantonsspital Graubünden (KSGR, N = 25) and Kantonsspital St. Gallen (KSSG, N = 26). Data were collected using three instruments: the SAKK C-SGA plus physician and patient evaluation forms. The SAKK C-SGA includes six measures covering five geriatric assessment domains (comorbidity, function, psychosocial, nutrition, cognition) using a mix of medical record abstraction (MRA) and patient interview. Five individual domain scores and one overall SAKK C-SGA score were calculated and dichotomized as below/above literature-based cut-offs. The SAKK C-SGA was evaluated by patient- and physician-estimated time to complete, ease of completion, and difficult or unanswered questions. RESULTS Time to complete the patient questionnaire was considered acceptable by almost all (≥96%) patients and physicians. Patients reported slightly shorter times to complete the questionnaire than physicians (17.33 ± 7.34 vs. 20.59 ± 6.53 minutes, p = 0.02). Both groups rated the patient questionnaire as easy/fairly easy to complete (91% vs. 84%, respectively, p = 0.14), with few difficult or unanswered questions. The MRA took on average 8.32 ± 4.72 minutes to complete. All physicians (100%) considered the time to complete the MRA acceptable, and 96% rated it as easy/fairly easy to complete. The study site populations differed on health-related characteristics (excellent/good physician-rated general health: KSGR 71% vs. KSSG 32%, p = 0.007). The overall mean C-SGA score was 2.4 ± 1.12. Patients at KSGR had lower C-SGA scores (2.00 ± 1.19 vs. 2.81 ± 0.90, p = 0.009) and a smaller proportion (28% vs. 65%, p = 0.008) was above the C-SGA cut-off score compared to KSSG. CONCLUSIONS These results suggest the SAKK C-SGA is a feasible, practical tool for use in clinical practice. It demonstrated discriminative ability based on objective geriatric assessment measures, but additional investigation of its use for clinical decision-making is warranted. The SAKK C-SGA also provides usable domain-level information for interventions to optimize outcomes in older cancer patients.
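To make the scoring scheme concrete, the sketch below dichotomizes five domain measures against cut-offs and counts the impaired domains to form the overall score. The domain names follow the abstract, but every cut-off, direction, and patient value is an invented placeholder, not the published SAKK C-SGA thresholds.

```python
# Hedged sketch of the dichotomized scoring logic described above.
# Cut-off values and directions are invented placeholders.
CUTOFFS = {
    "comorbidity":  (">=", 2),   # e.g. number of comorbid conditions
    "function":     ("<",  8),   # e.g. functional-status scale
    "psychosocial": ("<", 10),
    "nutrition":    ("<", 24),
    "cognition":    ("<",  7),
}

def domain_impaired(domain, value):
    op, cut = CUTOFFS[domain]
    return value >= cut if op == ">=" else value < cut

def csga_score(measures):
    """Overall score = number of domains beyond their cut-off (0-5)."""
    return sum(domain_impaired(d, v) for d, v in measures.items())

patient = {"comorbidity": 3, "function": 9, "psychosocial": 6,
           "nutrition": 20, "cognition": 8}
score = csga_score(patient)
print(score, "impaired domains;", "above" if score >= 3 else "below", "cut-off")
```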
Abstract:
The purpose of this study was to investigate the role of the fronto–striatal system for implicit task sequence learning. We tested performance of patients with compromised functioning of the fronto–striatal loops, that is, patients with Parkinson's disease and patients with lesions in the ventromedial or dorsolateral prefrontal cortex. We also tested amnesic patients with lesions either to the basal forebrain/orbitofrontal cortex or to thalamic/medio-temporal regions. We used a task sequence learning paradigm involving the presentation of a sequence of categorical binary-choice decision tasks. After several blocks of training, the sequence, hidden in the order of tasks, was replaced by a pseudo-random sequence. Learning (i.e., sensitivity to the ordering) was assessed by measuring whether this change disrupted performance. Although all the patients were able to perform the decision tasks quite easily, those with lesions to the fronto–striatal loops (i.e., patients with Parkinson's disease, with lesions in the ventromedial or dorsolateral prefrontal cortex and those amnesic patients with lesions to the basal forebrain/orbitofrontal cortex) did not show any evidence of implicit task sequence learning. In contrast, those amnesic patients with lesions to thalamic/medio-temporal regions showed intact sequence learning. Together, these results indicate that the integrity of the fronto–striatal system is a prerequisite for implicit task sequence learning.
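Learning in this paradigm is inferred from the performance cost when the hidden sequence is removed. A minimal sketch of such a disruption index, assuming per-block mean response times and invented example values, is given below.

```python
# Illustrative learning index for the task-sequence paradigm: learning is
# inferred from the RT disruption when the hidden sequence is replaced by
# a pseudo-random one. Block means below are invented example data (ms).
trained_blocks = [612, 598, 571, 549, 540]   # sequenced training blocks
random_block = 593                           # pseudo-random transfer block

def sequence_learning_index(trained, transfer, baseline_n=2):
    """Transfer-block RT minus the mean of the last trained blocks.
    A positive index (slowing) indicates sensitivity to the ordering."""
    baseline = sum(trained[-baseline_n:]) / baseline_n
    return transfer - baseline

print(sequence_learning_index(trained_blocks, random_block))  # -> 48.5 ms
```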
Abstract:
According to Attentional Control Theory (ACT; Eysenck et al., 2007), processing efficiency is decreased in high-anxiety situations because worrying thoughts compete for attentional resources. A repeated-measures design (high/low state anxiety and high/low perceptual task demands) was used to test the explanations offered by ACT. Complex football situations were displayed to expert and non-expert football players in a decision-making task in a controlled laboratory setting. Ratings of state anxiety and pupil-diameter measures were used to check the anxiety manipulation. Dependent variables were verbal response time and accuracy, mental effort ratings, and visual search behavior (e.g., visual search rate). Results confirmed that an increase in anxiety, indicated by higher state-anxiety ratings and larger pupil diameters, reduced processing efficiency for both groups (higher response times and mental effort ratings). Moreover, high task demands reduced the ability to shift attention between different locations for the expert group in the high-anxiety condition only. Since it was particularly the experts, who were expected to use more top-down strategies to guide visual attention under high perceptual task demands, who showed fewer attentional shifts in the high- than in the low-anxiety condition, anxiety seems, as predicted by ACT, to impair the shifting function by disrupting the balance between top-down and bottom-up processes.
Abstract:
Recent research demonstrates that response inhibition, a core executive function, may subserve self-regulation and self-control. However, it is unclear whether response inhibition also predicts self-control in the multifaceted, high-level phenomena of social decision-making. Here we examined whether electrophysiological indices of response inhibition would predict self-control in a social context. Electroencephalography was recorded as participants completed a widely used Go/NoGo task (the cued Continuous Performance Test). Participants then interacted with a partner in an economic exchange game that requires self-control. Results demonstrated that greater NoGo-Anteriorization and larger NoGo-P300 peak amplitudes, two established electrophysiological indices of response inhibition, both predicted more self-control in this social game. These findings support the continued integration of executive function and self-regulation research and help extend prior work into social decision-making processes.
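For orientation, the NoGo-P300 index mentioned here is typically quantified as the peak amplitude of the averaged NoGo ERP within a post-stimulus search window. The sketch below extracts such a peak from a synthetic waveform; the sampling rate, window, and waveform are invented, and real pipelines add filtering, artifact rejection, and baseline correction.

```python
import numpy as np

# Hedged sketch: extracting a NoGo-P300 peak amplitude from an averaged
# ERP waveform. The waveform here is a synthetic Gaussian bump, not data.
fs = 250                                   # samples per second (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)           # epoch: -200 ms to 800 ms
erp = 6 * np.exp(-((t - 0.45) ** 2) / (2 * 0.05 ** 2))  # synthetic P300 (µV)

window = (t >= 0.30) & (t <= 0.60)         # typical P300 search window
peak_amp = erp[window].max()               # peak amplitude in µV
peak_lat = t[window][erp[window].argmax()] # peak latency in s
print(f"NoGo-P300 peak: {peak_amp:.1f} µV at {peak_lat * 1000:.0f} ms")
```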
Abstract:
Objectives: Athletes differ in their ability to stay focused on performance and avoid distraction. Drawing on the strength model of self-control, we investigated whether athletes differ not only inter-individually in their disposition to stay focused and avoid distraction but also intra-individually in the situational availability of focused attention. Design/method: In the present experiment we hypothesized that basketball players (N = 40) with sufficient self-control resources would perform relatively better on a computer-based decision-making task under distraction conditions compared to a group whose self-control resources had been depleted by a prior task requiring self-control. Results: The results are in line with the strength model of self-control, demonstrating that an athlete's capability to focus attention relies on the situational availability of self-control strength. Conclusions: The current results indicate that having sufficient self-control strength in interference-rich sport settings is likely to be beneficial for decision making.
Abstract:
In this study we investigated whether synesthetic color experiences have effects similar to those of real colors in cognitive conflict adaptation. We tested 24 synesthetes and two yoke-matched control groups in a task-switching experiment that involved regular switches between three simple decision tasks (a color decision, a form decision, and a size decision). In most of the trials the stimuli were univalent, that is, specific to each task. Occasionally, however, black graphemes were presented for the size decisions, and we tested whether they would trigger synesthetic color experiences and thus turn into bivalent stimuli. The results confirmed this expectation. We were also interested in their effect on subsequent performance (i.e., the bivalency effect). The results showed that for synesthetic colors the bivalency effect was not as pronounced as for real colors. The latter result may be related to differences between synesthetes and controls in coping with color conflict.
Abstract:
Because land degradation is intrinsically complex and involves decisions by many agencies and individuals, land degradation mapping should be used as a learning tool through which managers, experts, and stakeholders can re-examine their views within a wider semantic context. In this paper, we introduce an analytical framework for mapping land degradation, developed by the World Overview of Conservation Approaches and Technologies (WOCAT) programs, which aims to produce thematic maps that serve as a useful tool and include effective information on land degradation and conservation status. This methodology would consequently provide an important background for decision-making in order to launch rehabilitation/remediation actions in high-priority intervention areas. As land degradation mapping is a problem-solving task that aims to provide clear information, this study entails the implementation of the WOCAT mapping tool, which integrates a set of indicators to appraise the severity of land degradation across a representative watershed. This work therefore focuses on the use of the most relevant indicators for measuring the impacts of different degradation processes in the El Mkhachbiya catchment, situated in northwestern Tunisia, and the actions taken to deal with them, based on an analysis of the operating modes and issues of degradation in different land-use systems. The study aims to provide a database for the surveillance and monitoring of land degradation, in order to support stakeholders in making appropriate choices and judging guidelines and suitable recommendations to remedy the situation and promote sustainable development. The approach is illustrated through a case study of an urban watershed in northwestern Tunisia. Results showed that the main land degradation drivers in the study area were natural processes exacerbated by human activities. The output of this analytical framework thus enabled better communication of land degradation issues and concerns in a way relevant to policymakers.