Abstract:
Answering run-time questions in object-oriented systems involves reasoning about and exploring connections between multiple objects. Developer questions exercise various aspects of an object and require multiple kinds of interactions depending on the relationships between objects, the application domain and the differing developer needs. Nevertheless, traditional object inspectors, the essential tools often used to reason about objects, favor a generic view that focuses on the low-level details of the state of individual objects. This leads to inefficient inspection effort and increases the time developers spend in the inspector. To improve the inspection process, we propose the Moldable Inspector, a novel approach for an extensible object inspector. The Moldable Inspector allows developers to look at objects using multiple interchangeable presentations and supports a workflow in which multiple levels of connecting objects can be seen together. Both of these aspects can be tailored to the domain of the objects and the question at hand. We further exemplify how the proposed solution improves the inspection process, introduce a prototype implementation and discuss new directions for extending the Moldable Inspector.
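A minimal sketch of the idea described above, assuming a hypothetical Python registry rather than the actual Pharo/Smalltalk implementation: several named presentations can be attached to an object type and selected interchangeably when inspecting an object, with a generic raw view as the fallback.

```python
# Illustrative sketch only: class and method names are hypothetical and serve to
# show the idea of multiple, domain-tailored presentations per object type.

class MoldableInspector:
    def __init__(self):
        self._presentations = {}  # object type -> list of (name, render function)

    def register(self, obj_type, name, render):
        """Attach a named, domain-specific presentation to an object type."""
        self._presentations.setdefault(obj_type, []).append((name, render))

    def views_for(self, obj):
        """All presentations applicable to obj, falling back to a generic 'Raw' view."""
        views = [(n, r) for t, vs in self._presentations.items()
                 if isinstance(obj, t) for n, r in vs]
        return views or [("Raw", lambda o: vars(o) if hasattr(o, "__dict__") else repr(o))]

    def inspect(self, obj, view=None):
        """Render obj with the requested presentation (or the first applicable one)."""
        views = dict(self.views_for(obj))
        name = view if view in views else next(iter(views))
        return name, views[name](obj)


inspector = MoldableInspector()
inspector.register(dict, "Items", lambda d: sorted(d.items()))
inspector.register(dict, "Keys", lambda d: sorted(d))
print(inspector.inspect({"b": 2, "a": 1}, view="Keys"))  # ('Keys', ['a', 'b'])
```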
Abstract:
Debuggers are crucial tools for developing object-oriented software systems as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements and show how our approach improves debugging by adapting the debugger to several domains.
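The selection mechanism described above can be sketched in the same hypothetical style (the real framework is implemented in Pharo; all names below are illustrative): each domain-specific debugger extension bundles views and operations with an activation predicate, and the framework picks a matching extension from the run-time debugging context.

```python
# Illustrative sketch only, not the Moldable Debugger's actual API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DebuggerExtension:
    name: str
    activates_on: Callable[[dict], bool]           # predicate over the debugging context
    views: List[str] = field(default_factory=list)
    operations: Dict[str, Callable[[dict], None]] = field(default_factory=dict)

class MoldableDebugger:
    def __init__(self, extensions, fallback):
        self.extensions, self.fallback = extensions, fallback

    def select(self, context):
        """Return the first extension whose activation predicate matches the context."""
        return next((e for e in self.extensions if e.activates_on(context)), self.fallback)

generic = DebuggerExtension("Generic", lambda ctx: True, views=["Stack", "Variables"])
parser = DebuggerExtension(
    "Parser", lambda ctx: ctx.get("frame_class") == "ParserFrame",
    views=["Input stream", "Production stack"],
    operations={"step to next production": lambda ctx: None},
)

debugger = MoldableDebugger([parser], fallback=generic)
print(debugger.select({"frame_class": "ParserFrame"}).name)  # Parser
print(debugger.select({"frame_class": "OrderedCollection"}).name)  # Generic
```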
Abstract:
Ice cores provide a robust reconstruction of past climate. However, developing timescales by annual-layer counting, which is essential to detailed climate reconstruction and interpretation, is problematic for ice cores collected at low-accumulation sites or in regions of compressed ice because of closely spaced layers. Ice-core analysis by laser ablation–inductively coupled plasma–mass spectrometry (LA-ICP-MS) provides sub-millimeter-scale sampling resolution (on the order of 100 μm in this study) and the low detection limits (ng L⁻¹) necessary to measure the chemical constituents preserved in ice cores. We present a newly developed cryocell that can hold a 1 m long section of ice core, and an alternative strategy for calibration. Using ice-core samples from central Greenland, we demonstrate the repeatability of multiple ablation passes, highlight the improved sampling resolution, verify the calibration technique and identify annual layers in the chemical profile in a deep section of an ice core where annual layers have not previously been identified using chemistry. In addition, using sections of cores from the Swiss/Italian Alps we illustrate the relationship between Ca, Na and Fe and particle concentration and conductivity, and validate the LA-ICP-MS Ca profile through a direct comparison with continuous flow analysis results.
Abstract:
Introduction Language is the most important means of communication and plays a central role in our everyday life. Brain damage (e.g. stroke) can lead to acquired disorders of language affecting the four linguistic modalities (i.e. reading, writing, speech production and comprehension) in different combinations and levels of severity. Every year, more than 5000 people (Aphasie Suisse) are affected by aphasia in Switzerland alone. Since aphasia is highly individual, the level of difficulty and the content of tasks have to be adapted continuously by the speech therapists. Computer-based assignments allow patients to train independently at home and thus increase the frequency of therapy. Recent developments in tablet computers have opened new opportunities to use these devices for rehabilitation purposes. Especially older people, who have no prior experience with computers, can benefit from the new technologies. Methods The aim of this project was to develop an application that, on the one hand, enables patients to train language-related tasks autonomously and, on the other hand, allows speech therapists to assign exercises to the patients and to track their results online. Seven categories with various types of assignments were implemented. The application has two parts which are separated by a user management system into a patient interface and a therapist interface. Both interfaces were evaluated using the SUS (System Usability Scale). The patient interface was tested by 15 healthy controls and 5 patients. For the patients, we also collected tracking data for further analysis. The therapist interface was evaluated by 5 speech therapists. Results The SUS scores for the patient interface were 98 for patients and 92.7 for healthy controls (median = 95, SD = 7, 95% CI [88.8, 96.6]); the therapist interface scored 68. Conclusion Both the patients and the healthy subjects gave the patient interface high SUS scores, which are considered "best imaginable". The therapist interface received a lower SUS score than the patient interface, but it is still considered "good" and "usable". The user tracking system and the interviews revealed room for improvement and inspired new ideas for future versions.
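For context, the SUS values reported above come from the standard System Usability Scale scoring rule (ten 1–5 Likert items; odd items contribute response − 1, even items contribute 5 − response; the sum is scaled by 2.5). The sketch below shows that rule only and is not the thesis' implementation.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,... sit at even indices
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# A strongly positive questionnaire yields a score near the reported ~98.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 2, 5, 1]))  # 97.5
```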
Abstract:
The usual Skolemization procedure, which removes strong quantifiers by introducing new function symbols, is in general unsound for first-order substructural logics defined based on classes of complete residuated lattices. However, it is shown here (following similar ideas of Baaz and Iemhoff for first-order intermediate logics in [1]) that first-order substructural logics with a semantics satisfying certain witnessing conditions admit a “parallel” Skolemization procedure where a strong quantifier is removed by introducing a finite disjunction or conjunction (as appropriate) of formulas with multiple new function symbols. These logics typically lack equivalent prenex forms. Also, semantic consequence does not in general reduce to satisfiability. The Skolemization theorems presented here therefore take various forms, applying to the left or right of the consequence relation, and to all formulas or only prenex formulas.
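Schematically, and only as an illustration of the shapes involved (not the paper's precise theorem, which depends on polarity and the witnessing conditions): classical satisfiability-preserving Skolemization removes the strong quantifier with a single new function symbol, whereas the parallel variant described above uses a finite disjunction over several new function symbols.

```latex
% Usual Skolemization (classical, satisfiability-preserving): one new function symbol f.
\forall x\,\exists y\,\varphi(x,y)
  \;\rightsquigarrow\;
\forall x\,\varphi\bigl(x, f(x)\bigr)

% Parallel Skolemization (shape only): a finite disjunction with new symbols f_1,\dots,f_n.
\forall x\,\exists y\,\varphi(x,y)
  \;\rightsquigarrow\;
\forall x\,\bigvee_{i=1}^{n}\varphi\bigl(x, f_i(x)\bigr)
```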
Abstract:
Computer games developed for a serious purpose, so-called serious games, can provide additional information for the screening and diagnosis of cognitive impairment. Moreover, they have the advantage of being an ecological tool, since they involve tasks of daily living. However, there is a need for more comprehensive designs regarding the acceptance of this technology, as the target population is older adults who are not used to interacting with novel technologies. Moreover, given the complexity of the diagnosis and the need for precise assessment, an evaluation of the best approach to analyzing the performance data is required. The present study examines the usability of a new screening tool and proposes several new approaches for data analysis.
Abstract:
BACKGROUND AND PURPOSE The posterior circulation Acute Stroke Prognosis Early CT Score (pc-ASPECTS) applied to CT angiography source images (CTA-SI) predicts the functional outcome of patients in the Basilar Artery International Cooperation Study (BASICS). We assessed the diagnostic and prognostic impact of pc-ASPECTS applied to perfusion CT (CTP) in the BASICS registry population. METHODS We applied pc-ASPECTS to CTA-SI and to cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) parameter maps of BASICS patients in whom both CTA and CTP studies had been performed. Hypoattenuation on CTA-SI, relative reduction in CBV or CBF, or relative increase in MTT was rated as abnormal. RESULTS CTA and CTP were available in 27/592 BASICS patients (4.6%). The proportion of patients with any perfusion abnormality was highest for MTT (93%; 95% confidence interval [CI], 76%-99%), compared with 78% (58%-91%) for CTA-SI and CBF, and 46% (27%-67%) for CBV (P < .001). All 3 patients with a CBV pc-ASPECTS < 8, compared with 6 of 23 patients with a CBV pc-ASPECTS ≥ 8, had died at 1 month (RR 3.8; 95% CI, 1.9-7.6). CONCLUSION CTP was performed in a minority of the BASICS registry population. Perfusion disturbances in the posterior circulation were most pronounced on MTT parameter maps. A CBV pc-ASPECTS < 8 may identify patients with high case fatality.
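As a worked check of the reported effect size, the relative risk follows directly from the two case-fatality proportions quoted above:

```latex
\mathrm{RR} \;=\; \frac{3/3}{\,6/23\,} \;=\; \frac{23}{6} \;\approx\; 3.8
```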
Abstract:
PURPOSE Our main objective was to prospectively determine the prognostic value of [¹⁸F]fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) after two cycles of rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone given every 14 days (R-CHOP-14) under standardized treatment and PET evaluation criteria. PATIENTS AND METHODS Patients with any stage of diffuse large B-cell lymphoma were treated with six cycles of R-CHOP-14 followed by two cycles of rituximab. PET/CT examinations were performed at baseline, after two cycles (and after four cycles if the patient was PET-positive after two cycles), and at the end of treatment. PET/CT examinations were evaluated locally and by central review. The primary end point was event-free survival at 2 years (2-year EFS). RESULTS Median age of the 138 evaluable patients was 58.5 years with a WHO performance status of 0, 1, or 2 in 56%, 36%, or 8% of the patients, respectively. By local assessment, 83 PET/CT scans (60%) were reported as positive and 55 (40%) as negative after two cycles of R-CHOP-14. Two-year EFS was significantly shorter for PET-positive compared with PET-negative patients (48% v 74%; P = .004). Overall survival at 2 years was not significantly different, with 88% for PET-positive versus 91% for PET-negative patients (P = .46). By using central review and the Deauville criteria, 2-year EFS was 41% versus 76% (P < .001) for patients who had interim PET/CT scans after two cycles of R-CHOP-14 and 24% versus 72% (P < .001) for patients who had PET/CT scans at the end of treatment. CONCLUSION Our results confirmed that an interim PET/CT scan has limited prognostic value in patients with diffuse large B-cell lymphoma homogeneously treated with six cycles of R-CHOP-14 in a large prospective trial. At this point, interim PET/CT scanning is not ready for clinical use to guide treatment decisions in individual patients.
Abstract:
PURPOSE To investigate the feasibility of MR diffusion tensor imaging (DTI) of the median nerve using simultaneous multi-slice echo planar imaging (EPI) with blipped CAIPIRINHA. MATERIALS AND METHODS After federal ethics board approval, MR imaging of the median nerves of eight healthy volunteers (mean age, 29.4 years; range, 25-32) was performed at 3 T using a 16-channel hand/wrist coil. An EPI sequence (b-value, 1,000 s/mm²; 20 gradient directions) was acquired without acceleration as well as with twofold and threefold slice acceleration. Fractional anisotropy (FA), mean diffusivity (MD) and quality of nerve tractography (number of tracks, average track length, track homogeneity, anatomical accuracy) were compared between the acquisitions using multivariate ANOVA and the Kruskal-Wallis test. RESULTS Acquisition time was 6:08 min for standard DTI, 3:38 min for twofold and 2:31 min for threefold acceleration. No differences were found regarding FA (standard DTI: 0.620 ± 0.058; twofold acceleration: 0.642 ± 0.058; threefold acceleration: 0.644 ± 0.061; p ≥ 0.217) and MD (standard DTI: 1.076 ± 0.080 mm²/s; twofold acceleration: 1.016 ± 0.123 mm²/s; threefold acceleration: 0.979 ± 0.153 mm²/s; p ≥ 0.074). Twofold acceleration yielded similar tractography quality compared to standard DTI (p > 0.05). With threefold acceleration, however, average track length and track homogeneity decreased (p = 0.004-0.021). CONCLUSION Accelerated DTI of the median nerve is feasible. Twofold acceleration yields similar results to standard DTI. KEY POINTS • Standard DTI of the median nerve is limited by its long acquisition time. • Simultaneous multi-slice acquisition is a new technique for accelerated DTI. • Accelerated DTI of the median nerve yields similar results to standard DTI.
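For readers unfamiliar with the two scalar measures compared above, MD and FA are the standard rotation-invariant summaries of the diffusion tensor eigenvalues λ1, λ2, λ3 (textbook definitions, not specific to this study):

```latex
\mathrm{MD} = \bar{\lambda} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3},
\qquad
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
\sqrt{\frac{(\lambda_1-\bar{\lambda})^2 + (\lambda_2-\bar{\lambda})^2 + (\lambda_3-\bar{\lambda})^2}
           {\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
```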
Abstract:
The lexical items like and well can serve as discourse markers (DMs), but can also play numerous other roles, such as verb or adverb. Identifying the occurrences that function as DMs is an important step for language understanding by computers. In this study, automatic classifiers using lexical, prosodic/positional and sociolinguistic features are trained over transcribed dialogues, manually annotated with DM information. The resulting classifiers improve state-of-the-art performance of DM identification, at about 90% recall and 79% precision for like (84.5% accuracy, κ = 0.69), and 99% recall and 98% precision for well (97.5% accuracy, κ = 0.88). Automatic feature analysis shows that lexical collocations are the most reliable indicators, followed by prosodic/positional features, while sociolinguistic features are marginally useful for the identification of DM like and not useful for well. The differentiated processing of each type of DM improves classification accuracy, suggesting that these types should be treated individually.
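The evaluation figures quoted above use the standard definitions of precision, recall, accuracy and Cohen's κ; the small sketch below (not the study's evaluation code) shows how they are computed from gold and predicted DM labels.

```python
# Labels are 1 for "discourse marker" and 0 for another use of a token such as
# "like" or "well". Standard binary-classification metrics over the two sequences.

def dm_metrics(gold, predicted):
    pairs = list(zip(gold, predicted))
    tp = sum(1 for g, p in pairs if g == 1 and p == 1)
    fp = sum(1 for g, p in pairs if g == 0 and p == 1)
    fn = sum(1 for g, p in pairs if g == 1 and p == 0)
    tn = sum(1 for g, p in pairs if g == 0 and p == 0)
    n = len(pairs)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_chance_dm = ((tp + fp) / n) * ((tp + fn) / n)
    p_chance_other = ((tn + fn) / n) * ((tn + fp) / n)
    p_chance = p_chance_dm + p_chance_other
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return precision, recall, accuracy, kappa

print(dm_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```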
Abstract:
This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. This data is then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. The paper presents the annotation procedures and their results in detail, and overviews the first systems trained on the annotated resources and their use for machine translation.
Abstract:
In this paper, we question the homogeneity of a large parallel corpus by measuring the similarity between various sub-parts. We compare results obtained using a general measure of lexical similarity based on χ² and by counting the number of discourse connectives. We argue that discourse connectives provide a more sensitive measure, revealing differences that are not visible with the general measure. We also provide evidence for the existence of specific characteristics defining translated texts as opposed to non-translated ones, due to a universal tendency for explicitation.
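As a hedged illustration of a χ²-based lexical comparison between two sub-corpora (the paper's exact measure may differ in detail), one can compare observed word frequencies against the counts expected if the two sub-parts were homogeneous:

```python
from collections import Counter

def chi_square_lexical(tokens_a, tokens_b, top_n=5000):
    """Chi-square statistic over the frequencies of the top_n most common words."""
    freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
    vocab = [w for w, _ in (freq_a + freq_b).most_common(top_n)]
    n_a, n_b = sum(freq_a.values()), sum(freq_b.values())
    chi2 = 0.0
    for w in vocab:
        o_a, o_b = freq_a[w], freq_b[w]
        total = o_a + o_b
        e_a = total * n_a / (n_a + n_b)   # expected count under homogeneity
        e_b = total * n_b / (n_a + n_b)
        chi2 += (o_a - e_a) ** 2 / e_a + (o_b - e_b) ** 2 / e_b
    return chi2

# Larger values indicate sub-corpora whose word frequencies diverge more.
print(chi_square_lexical("the cat sat on the mat".split(),
                         "the dog barked at the cat".split()))
```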