13 results for Pattern Informatics Method

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 30.00%

Publisher:

Abstract:

For crime scene investigation in cases of homicide, the pattern of bloodstains at the incident site is of critical importance. The morphology of the bloodstain pattern serves to determine the approximate blood source locations, the minimum number of blows and the positioning of the victim. In the present work, the benefits of three-dimensional bloodstain pattern analysis, including the ballistic approximation of the trajectories of the blood drops, are demonstrated using two illustrative cases. The crime scenes were documented in 3D using the non-contact methods of digital photogrammetry, tachymetry and laser scanning. Accurate, true-to-scale 3D models of the crime scenes, including the bloodstain pattern and the traces, were created. To determine the areas of origin of the bloodstain pattern, the trajectories of up to 200 well-defined bloodstains were analysed in CAD and photogrammetry software. The ballistic determination of the trajectories was performed using ballistics software. The advantages of this method are the short preparation time on site, the non-contact measurement of the bloodstains and the high accuracy of the bloodstain analysis. This method can be expected to deliver accurate results regarding the number and position of the areas of origin of the bloodstains; in particular, the vertical component is determined more precisely than with conventional methods. In both cases, the ballistic bloodstain pattern analysis enabled relevant forensic conclusions regarding the course of events.
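To make the contrast with conventional methods concrete, the following minimal Python sketch computes the classical straight-line ("stringing") estimate of a blood-source height from a single stain. It is not the authors' ballistic software; the stain dimensions and the distance to the area of convergence are hypothetical values.

# Minimal sketch (not the authors' software): conventional straight-line
# estimate of a blood-source height, which the ballistic approach refines.
import math

def impact_angle(width_mm: float, length_mm: float) -> float:
    """Impact angle from the ellipse of a single stain: alpha = arcsin(w / l)."""
    return math.asin(width_mm / length_mm)

def source_height(distance_to_convergence_m: float, alpha_rad: float) -> float:
    """Straight-line back-projection: height above the stain plane."""
    return distance_to_convergence_m * math.tan(alpha_rad)

alpha = impact_angle(width_mm=4.2, length_mm=9.5)              # hypothetical stain
print(round(math.degrees(alpha), 1), "deg impact angle")
print(round(source_height(0.85, alpha), 2), "m estimated source height")

Because the straight-line back-projection ignores gravity and drag, it tends to overestimate the vertical component, which is precisely what the ballistic trajectory fit described above is intended to correct.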

Relevance: 30.00%

Publisher:

Abstract:

This paper describes informatics for cross-sample analysis with comprehensive two-dimensional gas chromatography (GCxGC) and high-resolution mass spectrometry (HRMS). GCxGC-HRMS analysis produces large data sets that are rich with information but highly complex. The size of the data and the volume of information require automated processing for comprehensive cross-sample analysis, but the complexity poses a challenge for developing robust methods. The approach developed here analyzes GCxGC-HRMS data from multiple samples to extract a feature template that comprehensively captures the pattern of peaks detected in the retention-time plane. Then, for each sample chromatogram, the template is geometrically transformed to align with the detected peak pattern and generate a set of feature measurements for cross-sample analyses such as sample classification and biomarker discovery. The approach avoids the intractable problem of comprehensive peak matching by using a few reliable peaks for alignment and peak-based retention-plane windows to define comprehensive features that can be reliably matched for cross-sample analysis. The informatics are demonstrated with a set of 18 samples from breast-cancer tumors, each from a different individual, six each for Grades 1-3. The features allow classification that matches grading by a cancer pathologist with 78% success in leave-one-out cross-validation experiments. The HRMS signatures of the features of interest can be examined to determine elemental compositions and identify compounds.
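As an illustration of the leave-one-out evaluation mentioned above, the following Python sketch classifies per-sample feature vectors and reports the cross-validated accuracy. The feature matrix is random placeholder data, and the linear SVM is only a stand-in, since the abstract does not name the classifier used.

# Illustrative sketch only: leave-one-out classification of per-sample
# feature vectors; the data are placeholders, not GCxGC-HRMS measurements.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 50))          # 18 samples x 50 template features
y = np.repeat([1, 2, 3], 6)            # tumor grades, six samples each

correct = 0
for train, test in LeaveOneOut().split(X):
    clf = SVC(kernel="linear").fit(X[train], y[train])
    correct += int(clf.predict(X[test])[0] == y[test][0])
print(f"leave-one-out accuracy: {correct / len(y):.0%}")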

Relevance: 30.00%

Publisher:

Abstract:

It has long been known that trypanosomes regulate mitochondrial biogenesis during the life cycle of the parasite; however, the mitochondrial protein inventory (MitoCarta) and its regulation remain unknown. We present a novel computational method for genome-wide prediction of mitochondrial proteins using a support vector machine-based classifier with approximately 90% prediction accuracy. Using this method, we predicted the mitochondrial localization of 468 proteins with high confidence and have experimentally verified the localization of a subset of these proteins. We then applied a recently developed parallel sequencing technology to determine the expression profiles and the splicing patterns of a total of 1065 predicted MitoCarta transcripts during the development of the parasite, and showed that 435 of the transcripts significantly changed their expression while 630 remained unchanged in any of the three life stages analyzed. Furthermore, we identified 298 alternative splicing events, a small subset of which could lead to dual localization of the corresponding proteins.
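The following Python sketch illustrates the general idea of a support vector machine-based localization classifier. The amino-acid-composition features and the toy labels are assumptions made for illustration, not the authors' feature set or data.

# Sketch of an SVM-based localization classifier of the kind described above;
# features and labels are placeholders, not the authors' data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """20-dimensional amino-acid composition of a protein sequence."""
    seq = seq.upper()
    return np.array([seq.count(a) / max(len(seq), 1) for a in AMINO_ACIDS])

# toy data: random sequences labelled 1 (mitochondrial) or 0 (other)
rng = np.random.default_rng(1)
seqs = ["".join(rng.choice(list(AMINO_ACIDS), 200)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)

X = np.vstack([composition(s) for s in seqs])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())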

Relevance: 30.00%

Publisher:

Abstract:

Full axon counting of optic nerve cross-sections represents the most accurate method to quantify axonal damage, but such analysis is very labour-intensive. Recently, a new method has been developed, termed targeted sampling, which combines the salient features of a grading scheme with axon counting. Preliminary findings revealed that the method compared favourably with random sampling. The aim of the current study was to advance our understanding of the effect of sampling patterns on axon counts by comparing estimated axon counts from targeted sampling with those obtained from fixed-pattern sampling in a large collection of optic nerves with different severities of axonal injury.
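As a minimal illustration of how any sampling scheme turns field counts into a whole-nerve estimate, the Python sketch below scales the mean axon density of sampled counting fields by the nerve cross-sectional area. The counts and areas are hypothetical, and the targeted-sampling selection rule itself is not reproduced here.

# Toy sketch: whole-nerve axon estimate from sampled counting fields.
counted_axons = [212, 198, 240, 187, 225]   # axons counted in sampled fields (hypothetical)
field_area_um2 = 50.0 * 50.0                # area of one counting field
nerve_area_um2 = 600_000.0                  # cross-sectional area of the nerve (hypothetical)

mean_density = (sum(counted_axons) / len(counted_axons)) / field_area_um2
estimated_total = mean_density * nerve_area_um2
print(f"estimated axon count: {estimated_total:,.0f}")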

Relevance: 30.00%

Publisher:

Abstract:

Functional magnetic resonance imaging (fMRI) is presently performed either using blood oxygenation level-dependent (BOLD) contrast or using cerebral blood flow (CBF), measured with the arterial spin labeling (ASL) technique. The present fMRI study aimed to provide practical guidance on when to favour one method over the other. It involved three different acquisition methods during visual checkerboard stimulation in nine healthy subjects: 1) CBF contrast obtained from ASL, 2) BOLD contrast extracted from ASL and 3) BOLD contrast from echo planar imaging. Previous findings were replicated: i) no differences between the three measurements were found in the location of the activated region; ii) differences were found in the temporal characteristics of the signals; and iii) BOLD has significantly higher sensitivity than ASL perfusion. ASL fMRI is favoured when the investigation requires both perfusion and task-related signal changes. BOLD fMRI is more suitable in conjunction with fast event-related designs.

Relevance: 30.00%

Publisher:

Abstract:

OBJECT: In this study, 1H magnetic resonance (MR) spectroscopy was prospectively tested as a reliable method for presurgical grading of neuroepithelial brain tumors. METHODS: Using a database of tumor spectra obtained in patients with histologically confirmed diagnoses, 94 consecutive untreated patients were studied using single-voxel 1H spectroscopy (point-resolved spectroscopy; TE 135 msec, TR 1500 msec). A total of 90 tumor spectra obtained in patients with diagnostic 1H MR spectroscopy examinations were analyzed using commercially available software (MRUI/VARPRO) and classified using linear discriminant analysis as World Health Organization (WHO) Grade I/II, WHO Grade III, or WHO Grade IV lesions. In all cases, the classification results were matched with histopathological diagnoses made according to the WHO classification criteria after serial stereotactic biopsy procedures or open surgery. Histopathological studies revealed 30 Grade I/II tumors, 29 Grade III tumors, and 31 Grade IV tumors. The reliability of the histological diagnoses was validated considering a minimum postsurgical follow-up period of 12 months (range 12-37 months). Classifications based on spectroscopic data yielded 31 tumors in Grade I/II, 32 in Grade III, and 27 in Grade IV. Incorrect classifications included two Grade II tumors, one of which was identified as Grade III and one as Grade IV; two Grade III tumors identified as Grade II; two Grade III lesions identified as Grade IV; and six Grade IV tumors identified as Grade III. Furthermore, one glioblastoma (WHO Grade IV) was classified as WHO Grade I/II. This represents an overall success rate of 86%, and a 95% success rate in differentiating low-grade from high-grade tumors. CONCLUSIONS: The authors conclude that in vivo 1H MR spectroscopy is a reliable technique for grading neuroepithelial brain tumors.
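A minimal Python sketch of the linear discriminant step described above follows. The spectral features and their values are placeholders (no patient data); the three-group labelling simply mirrors the WHO grade groups of the study.

# Sketch of linear discriminant classification into WHO grade groups;
# the feature values are synthetic placeholders, not spectroscopy data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# 90 spectra x 3 illustrative features, three grade groups of 30 each
X = np.vstack([rng.normal(loc=m, scale=0.4, size=(30, 3)) for m in (1.0, 1.8, 2.6)])
y = np.repeat(["I/II", "III", "IV"], 30)

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())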

Relevance: 30.00%

Publisher:

Abstract:

This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, or a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure to be used for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.

Relevance: 30.00%

Publisher:

Abstract:

Arterial waves are seen as possible independent mediators of cardiovascular risk, and wave intensity analysis (WIA) has therefore been proposed as a method for patient selection for ventricular assist device (VAD) implantation. Interpreting measured wave intensity (WI) is challenging, and the complexity is increased by the implantation of a VAD. The waves generated by the VAD interact with the waves generated by the native heart, and this interaction varies with changing VAD settings. Eight sheep were implanted with a pulsatile VAD (PVAD) through ventriculo-aortic cannulation. The start of PVAD ejection was synchronized to the native R-wave and delayed by 0%-90% of the cardiac cycle in 10% steps, or phase shifts (PS). Pressure and velocity signals were registered using a combined Doppler and pressure wire positioned in the abdominal aorta and used to calculate the WI. Depending on the PS, different wave interference phenomena occurred. Maximum unloading of the left ventricle (LV) coincided with constructive interference and maximum blood flow pulsatility, and maximum loading of the LV coincided with destructive interference and minimum blood flow pulsatility. We believe that non-invasive WIA could potentially be used clinically to assess the mechanical load of the LV and to monitor peripheral hemodynamics such as blood flow pulsatility and the risk of intestinal bleeding.
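For readers unfamiliar with WIA, the sketch below shows the basic wave intensity calculation commonly used in the field, dI = dP x dU per time step, on synthetic pressure and velocity traces. It is not the analysis pipeline of the study.

# Sketch of the basic wave-intensity calculation (dI = dP * dU per time step);
# the pressure and velocity traces are synthetic, not the sheep data.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)                     # one cardiac cycle [s]
pressure = 90 + 20 * np.sin(2 * np.pi * t) ** 2     # mmHg, synthetic
velocity = 0.2 + 0.4 * np.sin(2 * np.pi * t) ** 2   # m/s, synthetic

dP = np.diff(pressure)
dU = np.diff(velocity)
wave_intensity = dP * dU                            # > 0: forward-running waves dominate
print("peak forward wave intensity:", wave_intensity.max())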

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVES: Proteomics approaches to cardiovascular biology and disease hold the promise of identifying specific proteins and peptides, or modifications thereof, to assist in the identification of novel biomarkers. METHOD: Using surface-enhanced laser desorption and ionization time-of-flight mass spectrometry (SELDI-TOF-MS), serum peptide and protein patterns were detected that enabled discrimination between postmenopausal women with and without hormone replacement therapy (HRT). RESULTS: Serum of 13 HRT and 27 control subjects was analyzed, and 42 peptides and proteins could be tentatively identified based on their molecular weight and binding characteristics on the chip surface. Using decision tree-based Biomarker Patterns™ Software classification and regression analysis, a discriminatory function was developed that allowed HRT women and controls to be distinguished correctly, yielding a sensitivity of 100% and a specificity of 100%. The results show that peptide and protein patterns have the potential to deliver novel biomarkers as well as pinpointing targets for improved treatment. The biomarkers obtained represent a promising tool to discriminate between HRT users and non-users. CONCLUSION: According to a tentative identification of the markers by their molecular weight and binding characteristics, most of them appear to be part of the inflammation-induced acute-phase response.
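The following Python sketch illustrates a decision tree classification with sensitivity and specificity computed on the training set, analogous in spirit to the analysis described above. The peak-intensity matrix is random placeholder data, not SELDI-TOF-MS spectra, and the tree settings are assumptions.

# Illustrative sketch of decision-tree classification with sensitivity and
# specificity; all data here are random placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 42))                 # 40 sera x 42 peak intensities
y = np.array([1] * 13 + [0] * 27)             # 13 HRT users, 27 controls

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, tree.predict(X)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))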

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE The aim of this study was to evaluate whether the distribution pattern of early ischemic changes in the initial MRI allows a practical method for estimating leptomeningeal collateralization in acute ischemic stroke (AIS). METHODS Seventy-four patients with AIS underwent MRI followed by conventional angiography and mechanical thrombectomy. Diffusion restriction on diffusion-weighted imaging (DWI) and correlated T2-hyperintensity of the infarct were retrospectively analyzed and subdivided in accordance with the Alberta Stroke Program Early CT Score (ASPECTS). Patients were angiographically graded into collateralization groups according to the method of Higashida and dichotomized into two groups: 29 subjects with collateralization grade 3 or 4 (well-collateralized group) and 45 subjects with grade 1 or 2 (poorly-collateralized group). Individual ASPECTS areas were compared between the groups. RESULTS Mean overall DWI-ASPECTS was 6.34 vs. 4.51 (well vs. poorly collateralized group, respectively), and mean T2-ASPECTS was 9.34 vs. 8.96. A significant difference between groups was found for DWI-ASPECTS (p<0.001), but not for T2-ASPECTS (p = 0.088). Regarding the individual areas, only the insula, M1-M4 and M6 showed significantly fewer infarctions in the well-collateralized group (p-values <0.001 to 0.015). Of the patients in the well-collateralized group, 89% showed 0-2 infarctions in these six areas (44.8% with 0 infarctions), while 59.9% of the patients in the poorly-collateralized group showed 3-6 infarctions. CONCLUSION Patients with poor leptomeningeal collateralization show more infarcts on the initial MRI, particularly in the ASPECTS areas M1 to M4, M6 and the insula. Therefore, DWI abnormalities in these areas may be a surrogate marker for poor leptomeningeal collaterals and may be useful for estimating the collateral status in routine clinical evaluation.
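A sketch of this kind of group comparison is shown below. The abstract does not name the statistical test, so a Mann-Whitney U test stands in here, and the ASPECTS values are simulated around the reported group means rather than taken from the study data.

# Sketch of a two-group comparison of DWI-ASPECTS; values are simulated,
# and the Mann-Whitney U test is an assumption, not the study's stated test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
well = np.clip(np.round(rng.normal(6.3, 1.5, 29)), 0, 10)   # 29 patients, well collateralized
poor = np.clip(np.round(rng.normal(4.5, 1.5, 45)), 0, 10)   # 45 patients, poorly collateralized

stat, p = mannwhitneyu(well, poor, alternative="two-sided")
print(f"DWI-ASPECTS: well {well.mean():.2f} vs poor {poor.mean():.2f}, p = {p:.4f}")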

Relevance: 30.00%

Publisher:

Abstract:

Five test runs were performed to assess possible bias when performing the loss on ignition (LOI) method to estimate the organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO3 at 950 °C, whereas LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40-70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposures of up to 64 h, which was likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality control experiment including ten different laboratories was carried out using each laboratory's own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was similar for 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparison of the results of the individual and the standardised method suggests that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace and the laboratory measuring affected the LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We therefore recommend that analysts be consistent in the LOI method used with respect to ignition temperatures, exposure times and sample size, and include information on these three parameters when referring to the method.
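For reference, the sketch below shows the standard LOI calculations from sequential dry weights (organic matter from the 550 °C step, carbonate-derived loss from the 950 °C step); the weights are hypothetical example values.

# Sketch of the standard LOI calculations; the dry weights are hypothetical.
def loi_550(dw_105: float, dw_550: float) -> float:
    """Organic-matter loss on ignition, % of dry weight at 105 degC."""
    return (dw_105 - dw_550) / dw_105 * 100.0

def loi_950(dw_105: float, dw_550: float, dw_950: float) -> float:
    """Loss between 550 and 950 degC (CO2 from carbonates), % of dry weight."""
    return (dw_550 - dw_950) / dw_105 * 100.0

dw_105, dw_550, dw_950 = 1.000, 0.820, 0.760   # grams, example sample
print(f"LOI550 = {loi_550(dw_105, dw_550):.1f} %")
print(f"LOI950 = {loi_950(dw_105, dw_550, dw_950):.1f} %")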

Relevance: 30.00%

Publisher:

Abstract:

Automated tissue characterization is one of the most crucial components of a computer-aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN) designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2×2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps, and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for this specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods on a challenging dataset. The classification performance (~85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists.
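A sketch of the described architecture is given below in PyTorch (5 convolutional layers with 2x2 kernels and LeakyReLU, average pooling over the final feature maps, three dense layers with 7 outputs). The input patch size and the filter counts are assumptions not stated in the abstract.

# Sketch of a CNN following the description above; patch size and channel
# counts are assumptions, not the authors' published configuration.
import torch
import torch.nn as nn

class ILDPatchCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        chans = [1, 16, 32, 64, 96, 128]           # assumed filter counts
        convs = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            convs += [nn.Conv2d(c_in, c_out, kernel_size=2), nn.LeakyReLU(0.01)]
        self.features = nn.Sequential(*convs)
        self.pool = nn.AdaptiveAvgPool2d(1)        # average over final feature maps
        self.classifier = nn.Sequential(
            nn.Linear(chans[-1], 64), nn.LeakyReLU(0.01),
            nn.Linear(64, 32), nn.LeakyReLU(0.01),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

logits = ILDPatchCNN()(torch.randn(4, 1, 32, 32))   # 4 grayscale CT patches (assumed 32x32)
print(logits.shape)                                 # torch.Size([4, 7])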

Relevance: 30.00%

Publisher:

Abstract:

Behavioural tests to assess affective states are widely used in human research and have recently been extended to animals. These tests assume that affective state influences cognitive processing, and that animals in a negative affective state interpret ambiguous information as predicting a negative outcome (displaying a negative cognitive bias). Most of these tests, however, require long discrimination training. The aim of the study was to validate an exploration-based cognitive bias test using two different handling methods, as previous studies have shown that standard tail handling of mice increases physiological and behavioural measures of anxiety compared to cupped handling. We therefore hypothesised that tail-handled mice would display a negative cognitive bias. We handled 28 female CD-1 mice for 16 weeks using either tail handling or cupped handling. The mice were then trained in an eight-arm radial maze, where two adjacent arms predicted a positive outcome (darkness and food), while the two opposite arms predicted a negative outcome (no food, white noise and light). After six days of training, the mice were also given access to the four previously unavailable intermediate, ambiguous arms of the radial maze and tested for cognitive bias. We were unable to validate this test, as mice from both handling groups displayed a similar pattern of exploration. Furthermore, we examined whether maze exploration is affected by the expression of stereotypic behaviour in the home cage. Mice with higher levels of stereotypic behaviour spent more time in positive arms and avoided ambiguous arms, displaying a negative cognitive bias. While this test needs further validation, our results indicate that it may allow the assessment of affective state in mice with minimal training, a major confound in current cognitive bias paradigms.