850 results for Local classification method


Relevance:

30.00%

Publisher:

Abstract:

Background: Finite element models of augmented vertebral bodies require realistic modelling of the cement-infiltrated region. Most methods published so far used idealized cement shapes or oversimplified material models for the augmented region. In this study, an improved, anatomy-specific, homogenized finite element method was developed and validated to predict the apparent as well as the local mechanical behavior of augmented vertebral bodies. Methods: Forty-nine human vertebral body sections were prepared by removing the cortical endplates and scanned with high-resolution peripheral quantitative CT before and after injection of a standard and a low-modulus bone cement. Forty-one specimens were tested in compression to measure stiffness, strength and contact pressure distributions between specimens and loading plates. From the remaining eight, fourteen cylindrical specimens were extracted from the augmented region and tested in compression to obtain material properties. Anatomy-specific finite element models were generated from the CT data. The models featured element-specific, density-fabric-based material properties, damage accumulation, real cement distributions and experimentally determined material properties for the augmented region. Apparent stiffness and strength as well as contact pressure distributions at the loading plates were compared between simulations and experiments. Findings: The finite element models predicted apparent stiffness (R² > 0.86) and apparent strength (R² > 0.92) very well. The numerically obtained pressure distributions were also in reasonable quantitative (R² > 0.48) and qualitative agreement with the experiments. Interpretation: The proposed finite element models proved to be an accurate tool for studying the apparent as well as the local mechanical behavior of augmented vertebral bodies.
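The validation step above compares simulated and measured apparent stiffness and strength via the coefficient of determination. A minimal sketch of that comparison, using hypothetical stiffness values rather than the study's data:

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination R^2 between experiment and simulation."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical apparent-stiffness values (kN/mm) for five specimens.
measured  = [18.2, 25.1, 31.4, 22.7, 27.9]
simulated = [17.5, 26.0, 30.2, 23.5, 28.8]
print(f"R^2 = {r_squared(measured, simulated):.3f}")
```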

Relevance:

30.00%

Publisher:

Abstract:

The north-eastern escarpment of Madagascar has been labelled a global biodiversity hotspot due to its extremely high rate of endemism; these endemic species are heavily threatened by accelerated deforestation and landscape change. The traditional practice of shifting cultivation, or "tavy", used by the majority of land users in this area to produce subsistence rice is commonly blamed for these threats. A wide range of stakeholders, from conservation to development agencies and from the private to the public sector, has therefore been involved in trying to find solutions to protect the remaining forest fragments and to increase agricultural production. Consequently, the provisioning, regulating and socio-cultural services of this forest-mosaic landscape are being fundamentally altered, leading to trade-offs between them and, consequently, new winners and losers amongst stakeholders at different scales. However, despite a growing body of evidence from case studies analysing local changes, the regional dynamics of the landscape and their contribution to such trade-offs remain poorly understood. This study therefore aims to use generalised landscape units as a basis for assessing multi-level stakeholder claims on ecosystem services, to inform negotiation, planning and decision making at a meso-scale. The study applies a mixed-method approach combining remote sensing, GIS and socio-economic methods to reveal current landscape dynamics, their change over time and the corresponding ecosystem service trade-offs induced by diverse stakeholder claims at the regional level. In a first step, a new regional land cover classification for three points in time (1995, 2005 and 2011) was conducted, including agricultural classes characteristic of shifting cultivation systems. Secondly, a novel GIS approach termed the "landscape mosaics approach", originally developed to assess the dynamics of shifting cultivation landscapes in Laos, was applied. Through this approach, generalised landscape mosaics were generated, allowing for a better understanding of changes in land use intensity rather than land cover. As a next step we will use these landscape units as proxies to map provisioning and regulating ecosystem services throughout the region. Through overlay with other regional background data, such as accessibility and population density, and information from a region-wide stakeholder analysis, multi-scale trade-offs between different services will be highlighted. The trade-offs observed at the regional scale will then be validated through socio-economic ground-truthing within selected sites at the local scale. We propose that such meso-scale knowledge is required by all stakeholders involved in decision making towards sustainable development of north-eastern Madagascar.
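The "landscape mosaics approach" generalizes a land cover map into units of land use intensity by looking at class composition within a neighborhood. A toy sketch of that moving-window idea (class codes, window size and thresholds are illustrative assumptions, not the published parameters):

```python
import numpy as np

def landscape_mosaics(land_cover, window=5, crop_class=1, forest_class=2):
    """Toy moving-window generalization of a land cover raster into mosaics.

    Each cell is relabelled by the composition of its surrounding window:
    0 = forest-dominated, 1 = mixed shifting-cultivation mosaic, 2 = crop-dominated.
    """
    h, w = land_cover.shape
    half = window // 2
    mosaic = np.zeros_like(land_cover)
    for i in range(h):
        for j in range(w):
            win = land_cover[max(0, i - half):i + half + 1,
                             max(0, j - half):j + half + 1]
            forest_share = np.mean(win == forest_class)
            crop_share = np.mean(win == crop_class)
            if forest_share > 0.7:
                mosaic[i, j] = 0
            elif crop_share > 0.7:
                mosaic[i, j] = 2
            else:
                mosaic[i, j] = 1          # mixed land-use-intensity unit
    return mosaic

rng = np.random.default_rng(0)
raster = rng.choice([1, 2], size=(20, 20), p=[0.4, 0.6])
print(landscape_mosaics(raster))
```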

Relevance:

30.00%

Publisher:

Abstract:

Images of an object under different illumination are known to provide strong cues about the object's surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction. We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameter space; because as few as two diffuse maxima in different images identify a unique solution, the GBR parameters can be identified very efficiently; finally, the algorithm is consistent, as it always returns the same solution given the same data. Our algorithm is also remarkably robust: it can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers among the detected maxima (up to 80% of the observations). The method is validated on real data and achieves state-of-the-art results.
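The GBR ambiguity is commonly written as a matrix G(μ, ν, λ) acting on pseudo-normals. The paper's closed-form semi-circle solution is not reproduced here, but a rough numerical sketch shows how two LDR maxima over-constrain (μ, ν, λ): at each maximum the de-biased normal G⁻¹b must be parallel to the de-biased light Gᵀs. The example pseudo-normals and pseudo-lights are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def gbr(mu, nu, lam):
    """Generalized bas-relief matrix G(mu, nu, lambda)."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [mu,  nu,  lam]])

def residuals(params, pseudo_normals, pseudo_lights):
    """At an LDR maximum the true normal G^-1 b must be parallel to the
    true light G^T s, so their normalized cross product should vanish."""
    mu, nu, lam = params
    G = gbr(mu, nu, lam)
    res = []
    for b, s in zip(pseudo_normals, pseudo_lights):
        n = np.linalg.inv(G) @ b
        l = G.T @ s
        res.extend(np.cross(n / np.linalg.norm(n), l / np.linalg.norm(l)))
    return res

# Hypothetical pseudo-normals and pseudo-lights at two detected LDR maxima.
b_list = [np.array([0.1, 0.2, 1.0]), np.array([-0.3, 0.1, 1.0])]
s_list = [np.array([0.2, 0.1, 0.9]), np.array([-0.2, 0.2, 0.8])]
fit = least_squares(residuals, x0=[0.0, 0.0, 1.0],
                    bounds=([-np.inf, -np.inf, 1e-3], [np.inf, np.inf, np.inf]),
                    args=(b_list, s_list))
print("estimated (mu, nu, lambda):", fit.x)
```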


Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Interruptions are known to have a negative impact on activity performance. Understanding of how an interruption contributes to human error is limited because there is no standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method; both methods have limitations. In this paper, a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular, and for improving healthcare quality and patient safety in general. METHOD: The hybrid method was developed using a deductive a priori classification framework, with the provision of adding new categories discovered inductively in the data. The inductive process used line-by-line coding and constant comparison, as described in Grounded Theory. RESULTS: The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case. CONCLUSIONS: Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides methodological support for understanding, analyzing, and managing interruptions and workflow.
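A minimal sketch of the hybrid coding logic: match an observed event against a priori (deductive) categories and create a new category inductively when nothing fits. The category names and keywords below are illustrative, not the study's actual framework:

```python
# A priori (deductive) categories; new ones are added inductively
# when an observed event fits none of the existing categories.
deductive_categories = {
    "direct patient care": ["examine", "medicate", "chart review"],
    "communication": ["page", "phone call", "verbal handoff"],
    "interruption": ["alarm", "question from colleague"],
}

def categorize(event, categories):
    """Return the matching category, or create a new inductive one."""
    for category, keywords in categories.items():
        if any(k in event for k in keywords):
            return category
    # Inductive step: no a priori category fits, so a new category is
    # coded (in the study: via line-by-line coding and constant comparison).
    new_category = f"inductive: {event}"
    categories[new_category] = [event]
    return new_category

for obs in ["phone call from lab", "alarm on monitor", "equipment failure"]:
    print(obs, "->", categorize(obs, deductive_categories))
```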

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To develop and implement a method for improved cerebellar tissue classification on brain MRI by automatically isolating the cerebellum prior to segmentation. MATERIALS AND METHODS: Dual fast spin echo (FSE) and fluid-attenuated inversion recovery (FLAIR) images were acquired from 18 normal volunteers on a 3 T Philips scanner. The cerebellum was isolated from the rest of the brain using a symmetric, inverse-consistent nonlinear registration of the individual brain to a parcellated template. The cerebellum was then separated by masking the anatomical image with the individual FLAIR images. Tissues in the cerebellum and in the rest of the brain were classified separately using a hidden Markov random field (HMRF), a parametric method, and then combined to obtain a tissue classification of the whole brain. The proposed method was evaluated subjectively by two experts on real MR brain images. The segmentation results on Brainweb images with varying noise and intensity nonuniformity levels were quantitatively compared with the ground truth by computing Dice similarity indices. RESULTS: The proposed method significantly improved cerebellar tissue classification in all normal volunteers included in this study without compromising the classification in the remaining part of the brain. The average similarity indices for gray matter (GM) and white matter (WM) in the cerebellum were 89.81 (±2.34) and 93.04 (±2.41), demonstrating excellent performance of the proposed methodology. CONCLUSION: The proposed method significantly improved tissue classification in the cerebellum; GM was overestimated when segmentation was performed on the whole brain as a single object.
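The Dice similarity index used for the Brainweb comparison is straightforward to compute from binary label masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice_index(seg, truth):
    """Dice similarity index (in %) between two binary label masks."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    intersection = np.logical_and(seg, truth).sum()
    return 100.0 * 2.0 * intersection / (seg.sum() + truth.sum())

# Toy 2D masks standing in for a GM classification and its ground truth.
seg   = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
truth = np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]])
print(f"Dice = {dice_index(seg, truth):.1f} %")
```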

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Caesarean section (CS) rates have risen over the past two decades. The aim of this observational study was to identify time-dependent variations in CS and vaginal delivery rates over a period of 11 years. METHOD: All deliveries at the University Women's Hospital Bern (13,701 deliveries, 1999-2009) were analysed using an internationally standardised and approved ten-group classification system. Caesarean sections on maternal request (CSMR) were evaluated separately. RESULTS: We found an overall CS rate of 36.63% and an increase in the CS rate over time (p <0.001). The low-risk groups were the two largest populations and displayed low CS rates, with significantly decreasing relative size over time. The relative size of the groups with induced labour increased significantly, but this did not affect the overall CS rate. Pregnancies complicated by breech presentation, multiple pregnancy or abnormal lie did not affect the overall CS rate either. The biggest contributors to the high CS rate were preterm delivery and the existence of a uterine scar from a previous CS. The CSMR rate was 1.45% and did not affect the overall CS rate. CONCLUSION: This observational study identified wide variations in caesarean section and vaginal delivery rates across the groups over time, and a shift towards high-risk populations was noted. The biggest contributors to high CS rates were identified, namely previous uterine scar and preterm delivery. Interventions aiming to reduce CS rates are planned.
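With a ten-group system, each group's contribution to the overall CS rate is its caesarean count divided by all deliveries, which is what makes groups such as previous-scar and preterm stand out. A small sketch with illustrative counts (not the Bern data):

```python
# Hypothetical counts per classification group: (deliveries, caesarean sections).
groups = {
    "group 1 (nulliparous, spontaneous labour)": (3200, 420),
    "group 5 (previous uterine scar)": (1400, 980),
    "group 10 (preterm delivery)": (900, 510),
}
total_deliveries = sum(n for n, _ in groups.values())
for name, (n, cs) in groups.items():
    rate = 100 * cs / n                          # CS rate within the group
    contribution = 100 * cs / total_deliveries   # contribution to overall rate
    print(f"{name}: relative size {100 * n / total_deliveries:.1f}%, "
          f"CS rate {rate:.1f}%, contributes {contribution:.1f}% overall")
```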

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a new fully automatic method for localizing and segmenting 3D intervertebral discs in MR images, where the two problems are solved in a unified data-driven regression and classification framework. We estimate the output of image points (image displacements for localization, foreground/background labels for segmentation) by exploiting training data and geometric constraints simultaneously. The problem is formulated as a unified objective function, which is then solved globally and efficiently. We validate our method on MR images of 25 patients. Taking manually labeled data as the ground truth, our method achieves a mean localization error of 1.3 mm, a mean Dice metric of 87%, and a mean surface distance of 1.3 mm. Our method can also be applied to other localization and segmentation tasks.
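The displacement-regression idea can be pictured as a voting scheme: each sampled patch predicts an offset to the disc center and the votes are aggregated robustly. A toy sketch (the paper's joint optimization over training and test displacements is not reproduced here):

```python
import numpy as np

def localize_center(patch_positions, predicted_displacements):
    """Aggregate per-patch votes for the disc center.

    Each sampled image patch at position p predicts a displacement d,
    casting a vote p + d; the center estimate is a robust average
    (coordinate-wise median) of all votes.
    """
    votes = np.asarray(patch_positions) + np.asarray(predicted_displacements)
    return np.median(votes, axis=0)

# Toy example: noisy displacement votes around a true center (50, 40, 30) mm.
rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(200, 3))
true_center = np.array([50.0, 40.0, 30.0])
displacements = true_center - positions + rng.normal(0, 1.5, size=(200, 3))
print("estimated center:", localize_center(positions, displacements))
```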

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) in MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraints on the test image. After the disc centers are localized, we segment the discs by classifying image pixels around the disc centers as background or foreground. The classification is done with a data-driven approach similar to the one used for localization, but here we estimate the foreground/background probability of each pixel instead of the image displacements. In addition, a neighborhood smoothness constraint is introduced to enforce local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results: a mean localization error of 1.6-2.0 mm, a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
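The neighborhood smoothness constraint on the label field can be illustrated with a much simpler stand-in: iteratively blending a foreground-probability map with its local mean before thresholding. This only sketches the effect, not the paper's joint formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_labels(fg_prob, iterations=5, weight=0.5):
    """Enforce local smoothness on a foreground-probability map by
    repeatedly blending each voxel with its neighborhood mean, then
    threshold to obtain the final foreground/background labels."""
    p = fg_prob.astype(float)
    for _ in range(iterations):
        p = (1 - weight) * p + weight * uniform_filter(p, size=3)
    return p > 0.5

rng = np.random.default_rng(2)
noisy_prob = rng.normal(0.3, 0.2, size=(32, 32, 32))
noisy_prob[10:22, 10:22, 10:22] += 0.5        # a disc-like foreground blob
labels = smooth_labels(np.clip(noisy_prob, 0, 1))
print("foreground voxels:", labels.sum())
```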

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. However, the reuse of individual health-related data faces several problems: either a unique personal identifier, such as a social security number, is not available, or non-unique person-identifiable information, such as names, is privacy-protected and cannot be accessed. A solution that protects privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, encrypted hash codes of two names differ completely even if the plain names differ by only a single character, so standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS: The P3RL method applies a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. The method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information needed for creating the templates (i.e. the data structure) without ever accessing plain person-identifiable information, we introduce a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs the probabilistic record linkage with the encrypted person-identifiable information and the plain non-sensitive variables. RESULTS: In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of the persons in the study. CONCLUSION: Privacy Preserving Probabilistic Record Linkage expands record linkage facilities to settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but for any setting with similar challenges.
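The Bloom-filter step can be sketched compactly: a name's character bigrams are hashed into a bit set, and the Dice coefficient between two such sets remains high for similar names even though the plain text is never exchanged. The hash choice, filter size and number of hash functions below are illustrative assumptions:

```python
import hashlib

def bloom_encode(name, size=1000, num_hashes=10):
    """Encode a string's character bigrams into a Bloom filter,
    represented as the set of set bit positions."""
    name = f" {name.lower()} "                  # pad so edges form bigrams
    bigrams = {name[i:i + 2] for i in range(len(name) - 1)}
    bits = set()
    for bg in bigrams:
        for k in range(num_hashes):
            h = hashlib.sha256(f"{k}:{bg}".encode()).hexdigest()
            bits.add(int(h, 16) % size)
    return bits

def dice_similarity(a, b):
    """Dice coefficient between two Bloom filters: similar names keep a
    high score after encoding, enabling probabilistic linkage."""
    return 2 * len(a & b) / (len(a) + len(b))

print(dice_similarity(bloom_encode("Meier"), bloom_encode("Meyer")))   # high
print(dice_similarity(bloom_encode("Meier"), bloom_encode("Garcia")))  # low
```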

Relevance:

30.00%

Publisher:

Abstract:

The long-term integrity of protected areas (PAs), and hence the maintenance of the related ecosystem services (ES), depends on the support of local people. In the present study, local people's perceptions of ecosystem services provided by PAs, and the factors that govern local preferences for PAs, are assessed. Fourteen study villages were randomly selected from three different protected forest areas and one control site along the southern coast of Côte d'Ivoire. Data were collected through a mixed-method approach, including qualitative semi-structured interviews and a household survey based on hypothetical choice scenarios. Local people's perceptions of ecosystem service provision were examined through qualitative content analysis, while the relations between people's preferences and the potential factors affecting them were analyzed with multinomial models. This study shows that rural villagers do perceive a number of different ecosystem services as benefits from PAs in Côte d'Ivoire. The results based on the quantitative data also suggest that local preferences for PAs and the related ecosystem services are driven by the PAs' management rules, age, and people's dependence on natural resources.
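A minimal sketch of fitting a multinomial model to choice data with statsmodels; the variables and the synthetic survey below are purely illustrative stand-ins for the household data described above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical household survey: preferred PA scenario (coded 0/1/2)
# explained by respondent age and a resource-dependence score.
rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "resource_dependence": rng.uniform(0, 1, n),
})
df["choice"] = rng.integers(0, 3, n)   # stand-in for observed preferences

X = sm.add_constant(df[["age", "resource_dependence"]])
model = sm.MNLogit(df["choice"], X).fit(disp=False)
print(model.summary())
```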

Relevance:

30.00%

Publisher:

Abstract:

Using the asymptotic form of the bulk Weyl tensor, we present an explicit approach that allows us to reconstruct exact four-dimensional Einstein spacetimes which are algebraically special with respect to Petrov’s classification. If the boundary metric supports a traceless, symmetric and conserved complex rank-two tensor, which is related to the boundary Cotton and energy-momentum tensors, and if the hydrodynamic congruence is shearless, then the bulk metric is exactly resummed and captures modes that stand beyond the hydrodynamic derivative expansion. We illustrate the method when the congruence has zero vorticity, leading to the Robinson-Trautman spacetimes of arbitrary Petrov class, and quote the case of non-vanishing vorticity, which captures the Plebański-Demiański Petrov D family.
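For context, Petrov's classification is conventionally expressed through the five complex Weyl scalars of a null tetrad (l, n, m, m̄); the following is textbook material, not specific to the paper above:

```latex
\begin{align*}
\Psi_0 &= C_{abcd}\, l^a m^b l^c m^d, &
\Psi_1 &= C_{abcd}\, l^a n^b l^c m^d, &
\Psi_2 &= C_{abcd}\, l^a m^b \bar{m}^c n^d,\\
\Psi_3 &= C_{abcd}\, l^a n^b \bar{m}^c n^d, &
\Psi_4 &= C_{abcd}\, n^a \bar{m}^b n^c \bar{m}^d.
\end{align*}
% A spacetime is algebraically special iff it admits a repeated principal
% null direction; aligning l with it gives \Psi_0 = \Psi_1 = 0.
% Petrov type D (e.g. the Plebanski--Demianski family) additionally has
% \Psi_3 = \Psi_4 = 0, leaving only \Psi_2 \neq 0.
```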

Relevance:

30.00%

Publisher:

Abstract:

Optimal adjustment of brain networks allows the biased processing of information in response to environmental demands and is therefore a prerequisite for adaptive behaviour. It has been widely shown that a biased network state is associated with a particular cognitive process. However, those associations were identified by backward categorization of trials and cannot provide a causal link to cognitive processes. This problem remains a major obstacle to advancing the field, in particular human cognitive neuroscience. In my talk, I will present two approaches to address the causal relationships between brain network interactions and behaviour. First, we combined connectivity analysis of fMRI data with a machine learning method to predict inter-individual differences in behaviour and responsiveness to environmental demands. The connectivity-based classification approach outperforms local activation-based classification, suggesting that interactions in brain networks carry information about instantaneous cognitive processes. Second, we have recently established a brand-new method combining transcranial alternating current stimulation (tACS), transcranial magnetic stimulation (TMS) and EEG. We use this method to measure signal transmission between brain areas while introducing extrinsic oscillatory brain activity, and to study the causal association between oscillatory activity and behaviour. We show that phase-matched oscillatory activity creates a phase-dependent modulation of signal transmission between brain areas, while phase-shifted oscillatory activity blunts this modulation. The results suggest that phase coherence between brain areas plays a cardinal role in signal transmission in brain networks. In sum, I argue that causal approaches will provide a more concrete backbone for cognitive neuroscience.
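A minimal sketch of connectivity-based classification as described: each subject's region-by-region correlation matrix is vectorized and fed to a linear classifier. The data dimensions and synthetic labels are illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def connectivity_features(timeseries):
    """Upper triangle of the region-by-region correlation matrix,
    used as the feature vector for one subject."""
    corr = np.corrcoef(timeseries.T)             # regions x regions
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Hypothetical data: 40 subjects, 200 time points, 10 brain regions,
# with a binary behavioural label per subject.
rng = np.random.default_rng(4)
X = np.array([connectivity_features(rng.normal(size=(200, 10)))
              for _ in range(40)])
y = rng.integers(0, 2, 40)

scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```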

Relevance:

30.00%

Publisher:

Abstract:

We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called sparse pseudo-input GP as the global model. The algorithm is tested on four benchmark problems whose number of starting points ranges from 10² to 10⁴. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and that it even provides significant advantages when compared with state-of-the-art EI algorithms.
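The standard expected improvement criterion used within the local GP models has a closed form: for a posterior N(μ, σ²) at a candidate point and current best observation f_min, EI = (f_min − μ)Φ(z) + σφ(z) with z = (f_min − μ)/σ. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Standard EI for minimization: the expected amount by which a GP
    posterior N(mu, sigma^2) at a candidate point improves on the
    current best observation f_min."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_min - mu) / sigma
        ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)

# Posterior means/stddevs at three candidate points from a local GP.
mu = np.array([0.9, 1.2, 0.7])
sigma = np.array([0.3, 0.5, 0.05])
print(expected_improvement(mu, sigma, f_min=1.0))
```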

Relevance:

30.00%

Publisher:

Abstract:

Automated tissue characterization is one of the most crucial components of a computer-aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they can be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN) designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2×2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps, and three dense layers. The last dense layer has 7 outputs, one per class considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14,696 image patches derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for this specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods on a challenging dataset. The classification performance (~85.5%) demonstrates the potential of CNNs for analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists.
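A sketch of the described architecture in PyTorch: five 2×2 convolutions with LeakyReLU, average pooling over the final feature maps, and three dense layers ending in 7 outputs. The channel widths and the 32×32 patch size are assumptions, as the abstract does not specify them:

```python
import torch
import torch.nn as nn

class ILDNet(nn.Module):
    """Sketch of the described CNN: 5 conv layers with 2x2 kernels and
    LeakyReLU, average pooling over the final feature maps, then three
    dense layers with 7 outputs (one per ILD pattern class). Channel
    widths and the 32x32 input patch size are assumptions."""
    def __init__(self, num_classes=7):
        super().__init__()
        chans = [1, 16, 32, 64, 96, 128]          # assumed widths
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=2),
                       nn.LeakyReLU(0.01)]
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)       # average over feature maps
        self.classifier = nn.Sequential(
            nn.Linear(chans[-1], 64), nn.LeakyReLU(0.01),
            nn.Linear(64, 32), nn.LeakyReLU(0.01),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

patches = torch.randn(8, 1, 32, 32)               # a batch of CT patches
print(ILDNet()(patches).shape)                    # torch.Size([8, 7])
```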