942 results for automated analysis


Relevance: 100.00%

Abstract:

Objectives: This prospective study investigated the effects of caffeine ingestion on the extent of adenosine-induced perfusion abnormalities during myocardial perfusion imaging (MPI). Methods: Thirty patients with inducible perfusion abnormalities on standard (caffeine-abstinent) adenosine MPI underwent repeat testing with supplementary coffee intake. Baseline and test MPIs were assessed for stress percent defect, rest percent defect, and percent defect reversibility. Plasma levels of caffeine and metabolites were assessed on both occasions and correlated with MPI findings. Results: Despite significant increases in caffeine [mean difference 3,106 μg/L (95% CI 2,460 to 3,752 μg/L; P < .001)] and metabolite concentrations over a wide range, there was no statistically significant change in stress percent defect and percent defect reversibility between the baseline and test scans. The increase in caffeine concentration between the baseline and the test phases did not affect percent defect reversibility (average change −0.003 for every 100 μg/L increase; 95% CI −0.17 to 0.16; P = .97). Conclusion: There was no significant relationship between the extent of adenosine-induced coronary flow heterogeneity and the serum concentration of caffeine or its principal metabolites. Hence, the stringent requirements for prolonged abstinence from caffeine before adenosine MPI, which are based on limited studies, appear ill-founded.

Relevance: 100.00%

Abstract:

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is an established method for the detection and diagnosis of breast lesions. While mass-like enhancing lesions can be readily categorized according to the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon, a majority of diagnostically challenging lesions, the so-called non-mass-like enhancing lesions, remain difficult to analyze both qualitatively and quantitatively. The evaluation of kinetic and/or morphological characteristics of non-masses is thus a challenging task for automated analysis and is of crucial importance for advancing current computer-aided diagnosis (CAD) systems. Compared to the well-characterized mass-enhancing lesions, non-masses have ill-defined, blurred tumor borders and a kinetic behavior that is not easily generalizable and therefore not discriminative between malignant and benign non-masses. To overcome these difficulties and pave the way for novel CAD systems for non-masses, we evaluate several kinetic and morphological descriptors separately, as well as a novel technique, Zernike velocity moments, which captures the joint spatio-temporal behavior of these lesions; we additionally consider the impact of non-rigid motion compensation on a correct diagnosis.
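
Zernike velocity moments jointly encode shape and motion. As a rough illustration of the underlying idea only (not the authors' formulation, which additionally weights the moments by inter-frame velocity), one can compute plain Zernike moments of the lesion ROI in every frame of the dynamic series and stack them over time; the sketch below assumes the mahotas library, and the radius and degree values are illustrative.

```python
import numpy as np
import mahotas

def spatio_temporal_descriptor(frames, radius=32, degree=8):
    """Zernike moments of a lesion ROI at each time point, stacked over time."""
    per_frame = [mahotas.features.zernike_moments(frame, radius, degree=degree)
                 for frame in frames]
    return np.stack(per_frame)  # shape: (n_timepoints, n_moments)
```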

Relevance: 100.00%

Abstract:

Measures of icon designs rely heavily on surveys of the perceptions of population samples. Thus, measuring the extent to which changes in the structure of an icon will alter its perceived complexity can be costly and slow. An automated system capable of producing reliable estimates of perceived complexity could reduce development costs and time. Measures of icon complexity developed by Garcia, Badre, and Stasko (1994) and McDougall, Curry, and de Bruijn (1999) were correlated with six icon properties measured with image-processing routines in Matlab (MathWorks, 2001): icon foreground, the number of objects in an icon, the number of holes in those objects, and two calculations of icon edges and of homogeneity in icon structure. The strongest correlates with human judgments of perceived icon complexity (McDougall et al., 1999) were structural variability (r_s = .65) and edge information (r_s = .64).
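
For readers without Matlab, a rough Python analogue of several of the six properties (foreground proportion, object count, hole count, and a crude edge measure) can be written with scikit-image. The file name, threshold, and choice of edge operator below are assumptions, not the original Matlab routines.

```python
import numpy as np
from skimage import io, measure, filters

icon = io.imread("icon.png", as_gray=True)    # hypothetical file name
fg = icon < 0.5                               # assumes a dark glyph on a light ground
labels = measure.label(fg, connectivity=2)
n_objects = labels.max()
euler = measure.euler_number(fg, connectivity=2)
n_holes = n_objects - euler                   # in 2-D: Euler number = objects - holes
foreground_pct = fg.mean() * 100              # percent of pixels in the foreground
edge_density = filters.sobel(icon).mean()     # crude proxy for edge information
```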

Relevance: 100.00%

Abstract:

In cloud environments, IT solutions are delivered to users via shared infrastructure. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A key objective of cloud providers is thus to develop resource provisioning and management solutions that minimize energy consumption while still guaranteeing Service Level Agreements (SLAs). However, a thorough understanding of both system performance and energy consumption patterns in complex cloud systems is imperative to balance energy efficiency against acceptable performance. In this paper, we present StressCloud, a performance and energy consumption analysis tool for cloud systems. StressCloud can automatically generate load tests and profile system performance and energy consumption data. Using StressCloud, we have conducted extensive experiments to profile and analyse system performance and energy consumption with different types and mixes of runtime tasks. We collected fine-grained energy consumption and performance data under different resource allocation strategies, system configurations and workloads. The experimental results show the correlation coefficients between energy consumption, system resource allocation strategies and workload, as well as the performance of the cloud applications. Our results can be used to guide the design and deployment of cloud applications to balance energy and performance requirements.
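
As a hedged illustration of the kind of analysis the paper reports (not StressCloud's own tooling), pairwise correlation coefficients between profiled energy and performance metrics can be computed from a sampling log with pandas; the file and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical profiling log: one row per sampling interval.
df = pd.read_csv("profile.csv")
metrics = ["cpu_load", "mem_mb", "io_mbps", "watts", "latency_ms"]
corr = df[metrics].corr()                       # pairwise Pearson correlations
print(corr["watts"].sort_values(ascending=False))  # what moves with energy draw
```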

Relevance: 100.00%

Abstract:

A sensitive, selective, and reproducible in-tube solid-phase microextraction and liquid chromatographic (in-tube SPME/LC-UV) method for the determination of lidocaine and its metabolite monoethylglycinexylidide (MEGX) in human plasma has been developed, validated, and applied to a pharmacokinetic study in pregnant women with gestational diabetes mellitus (GDM) undergoing epidural anesthesia. Important factors in the optimization of in-tube SPME performance are discussed, including the draw/eject sample volume, the number of draw/eject cycles, the draw/eject flow rate, the sample pH, and the influence of plasma proteins. The limit of quantification of the in-tube SPME/LC method was 50 ng/mL for both lidocaine and its metabolite. Interday and intraday precision had coefficients of variation below 8%, and accuracy ranged from 95 to 117%. The response of the method was linear over a dynamic range of 50 to 5000 ng/mL, with correlation coefficients higher than 0.9976. The developed in-tube SPME/LC method was successfully used to analyze lidocaine and its metabolite in plasma samples from pregnant women with GDM undergoing epidural anesthesia for the pharmacokinetic study.
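
A minimal sketch of the linearity check behind the reported dynamic range, using synthetic numbers rather than the study's data: fit detector response against concentration and report the correlation coefficient, as in the reported r > 0.9976.

```python
import numpy as np

conc = np.array([50, 100, 250, 500, 1000, 2500, 5000], dtype=float)  # ng/mL
rng = np.random.default_rng(1)
area = 0.004 * conc + rng.normal(0.0, 0.5, conc.size)  # synthetic responses
slope, intercept = np.polyfit(conc, area, 1)           # least-squares calibration line
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.4g} * conc + {intercept:.3g}, r = {r:.4f}")
```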

Relevance: 100.00%

Abstract:

The first phase of the research concerned the state of the art of cycling infrastructure, bicycle use, and evaluation methods. In this phase, the candidate studied the "bicycle system" in countries with high bicycle use, in particular the Netherlands. An evaluation was carried out of the survey questionnaires on general mobility collected within the European project BICY in 13 cities of the participating countries. The questionnaire was designed, tested and implemented, and later validated by a trial in Bologna. The results were adjusted using demographic information and compared with official data. The cycling infrastructure analysis was based on data from the OpenStreetMap database: algorithms were programmed in Python to extract the infrastructure data for a region and to sort and filter cycling infrastructure while computing attributes such as the length of path arcs. The results were compared with official data where available. The structure of the thesis is as follows: 1. Introduction: the state of cycling in several advanced countries, methods of analysis, and their importance for implementing appropriate cycling policies; supply of and demand for bicycle infrastructure. 2. Survey on mobility: details of the survey developed and the evaluation method; results are presented and compared with official data. 3. Analysis of cycling infrastructure based on OpenStreetMap data: the methods and algorithms developed during the PhD; the results obtained by the algorithms are compared with official data. 4. Discussion: the above results are discussed and compared; in particular, cycling demand is compared with the length of the cycle network within a city. 5. Conclusions.
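
A minimal sketch of the kind of extraction described, assuming a recent osmnx (this is not the thesis's own code, and the single highway=cycleway tag is a simplification of real cycling-infrastructure filters): pull cycleway geometries for one city and total their length in kilometres.

```python
import osmnx as ox

tags = {"highway": "cycleway"}                      # simplified filter
gdf = ox.features_from_place("Bologna, Italy", tags=tags)
lines = gdf[gdf.geometry.geom_type == "LineString"]
lines = ox.projection.project_gdf(lines)            # project to metres before measuring
print(f"{lines.geometry.length.sum() / 1000:.1f} km of tagged cycleways")
```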

Relevance: 100.00%

Abstract:

In this paper, we describe an algorithm that automatically detects and labels peaks I-VII of the normal, suprathreshold auditory brainstem response (ABR). The algorithm proceeds in three stages, with the option of a fourth: (1) all candidate peaks and troughs in the ABR waveform are identified using zero crossings of the first derivative; (2) peaks I-VII are identified from these candidate peaks based on their latency and morphology; (3) if required, peaks II and IV are identified as points of inflection using zero crossings of the second derivative; and (4) interpeak troughs are identified before peak latencies and amplitudes are measured. The performance of the algorithm was estimated on a set of 240 normal ABR waveforms recorded at a stimulus intensity of 90 dB nHL. When compared to an expert audiologist, the algorithm correctly identified the major ABR peaks (I, III and V) in 96-98% of the waveforms and the minor ABR peaks (II, IV, VI and VII) in 45-83% of waveforms. Whilst peak II was correctly identified in only 83% and peak IV in 77% of waveforms, 5% of the peak II identifications and 31% of the peak IV identifications came as a direct result of allowing these peaks to be found as points of inflection.
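
A minimal sketch of stage (1), candidate extrema from zero crossings of the first derivative; the latency and morphology rules of stages (2)-(4) are study-specific and omitted here. NumPy only; the sampling-rate handling is an assumption.

```python
import numpy as np

def candidate_extrema(waveform, fs):
    """Stage (1): candidate peaks/troughs from zero crossings of the 1st derivative."""
    d1 = np.diff(waveform)
    signs = np.sign(d1)
    crossings = np.where(np.diff(signs) != 0)[0] + 1
    # derivative goes + to -: local peak; - to +: local trough
    peaks = [i for i in crossings if signs[i - 1] > 0 and signs[i] < 0]
    troughs = [i for i in crossings if signs[i - 1] < 0 and signs[i] > 0]
    latencies_ms = np.asarray(peaks) / fs * 1000.0   # candidate peak latencies
    return peaks, troughs, latencies_ms
```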

Relevance: 100.00%

Abstract:

Background: Automated measurement of LV function could extend the clinical utility of echo to less expert readers. We sought to define normal ranges of global 2D strain (2DS) and strain rate (SR) in an international, multicenter study of healthy subjects, and to assess the determinants of variation. Methods: SR and 2DS were measured in 18 myocardial segments in both apical and short-axis views in 227 normal subjects (38% men, aged 48±14 years) with no cardiac history, risk factors or drug therapy. The association of age and resting hemodynamics with global strain indices was sought using multiple regression. Differences in variance were expressed as F values. Results: Baseline SBP was 127±18 mmHg, pulse was 76±13/min and ejection fraction 50±20%. Although global longitudinal strain was influenced by end-systolic volume (F=4.2, p
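
The reported analysis (multiple regression of a global strain index on age and resting hemodynamics, with per-term F values) could be reproduced along these lines; this is a hedged sketch with hypothetical file and column names, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("strain_normals.csv")   # hypothetical: gls, age, sbp, pulse, esv
model = smf.ols("gls ~ age + sbp + pulse + esv", data=df).fit()
print(anova_lm(model, typ=2))            # per-term F statistics
```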

Relevance: 80.00%

Abstract:

Purpose: The aim of the study was to determine the association, agreement, and detection capability of manual, semiautomated, and fully automated methods of corneal nerve fiber length (CNFL) quantification of the human corneal subbasal nerve plexus (SNP). Methods: Thirty-three participants with diabetes and 17 healthy controls underwent laser scanning corneal confocal microscopy. Eight central images of the SNP were selected for each participant and analyzed using manual (CCMetrics), semiautomated (NeuronJ), and fully automated (ACCMetrics) software to quantify the CNFL. Results: For the entire cohort, mean CNFL values quantified by CCMetrics, NeuronJ, and ACCMetrics were 17.4 ± 4.3 mm/mm², 16.0 ± 3.9 mm/mm², and 16.5 ± 3.6 mm/mm², respectively (P < 0.01). CNFL quantified using CCMetrics was significantly higher than the values obtained by NeuronJ and ACCMetrics (P < 0.05). The 3 methods were highly correlated (correlation coefficients 0.87–0.98, P < 0.01). The intraclass correlation coefficients were 0.87 for ACCMetrics versus NeuronJ and 0.86 for ACCMetrics versus CCMetrics. Bland–Altman plots showed good agreement between the manual, semiautomated, and fully automated analyses of CNFL. A small underestimation of CNFL was observed with ACCMetrics as the amount of nerve tissue increased. All 3 methods were able to detect CNFL depletion in diabetic participants (P < 0.05) and in those with peripheral neuropathy as defined by the Toronto criteria, compared with healthy controls (P < 0.05). Conclusions: Automated quantification of CNFL provides neuropathy detection ability comparable to that of the manual and semiautomated methods. Because of its speed, objectivity, and consistency, fully automated analysis of CNFL may be advantageous in studies of diabetic neuropathy.
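
Bland-Altman agreement between two CNFL methods reduces to the bias and 95% limits of agreement of their paired differences. The sketch below is illustrative, using synthetic values loosely matching the reported means rather than study data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Synthetic CNFL values (mm/mm^2), loosely matching the reported means.
rng = np.random.default_rng(0)
manual = rng.normal(17.4, 4.3, 50)
automated = manual - 0.9 + rng.normal(0.0, 1.0, 50)
bias, limits = bland_altman(manual, automated)
```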

Relevance: 80.00%

Abstract:

Background: A major challenge in assessing students' conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate incomplete understanding of a topic or concept based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students' reasoning for their multiple-choice selections. Open-ended or structured-response questions are one method for capturing higher-level thinking, but are often costly in terms of the time and attention required to properly assess student responses. Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g. students' written explanations of and reasoning for their multiple-choice selections. Design/Method: We incorporated an open-response component into an online signals-and-systems multiple-choice test to capture written explanations of students' selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing the results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. To understand and delineate the process for effectively analysing text provided by students, the researchers evaluated the strengths and weaknesses of both the human and automated approaches. Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas, some of which were not anticipated or included among the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding. The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in identifying concepts and their relationships in large textual data sets. We also identified several challenges to automated analysis, as well as to manual and computer-assisted analysis. Conclusions: This study highlighted the usefulness of incorporating and analysing students' reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective
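
The study used Leximancer and NVivo; as a generic, hedged analogue of what automated lexical analysis does, term frequencies and co-occurrences across written explanations can be surfaced with scikit-learn. The toy responses below are invented stand-ins, not study data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for students' written explanations.
responses = [
    "the output is the convolution of the input with the impulse response",
    "convolution in time is multiplication in the frequency domain",
    "I multiplied the signals because convolution means multiplication",
    "the impulse response fully characterises a linear time-invariant system",
]
vec = CountVectorizer(stop_words="english", min_df=2)
X = (vec.fit_transform(responses) > 0).astype(int)   # term presence per response
cooc = (X.T @ X).toarray()                           # term-by-term co-occurrence counts
terms = vec.get_feature_names_out()
top = np.argsort(cooc.diagonal())[::-1][:5]          # most widespread terms
print([(terms[i], int(cooc[i, i])) for i in top])
```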