880 results for FEATURE EXTRACTION


Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and its associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
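The decomposition at the heart of the approach builds on the conventional Gram-Schmidt step. The sketch below (plain numpy; the function name and shapes are chosen here for illustration, not taken from the paper's extended subspace version) factors a regression matrix P into orthogonal columns W and an upper-unitriangular A, then recovers the least-squares parameters:

```python
import numpy as np

def gram_schmidt_ls(P, y):
    """Modified Gram-Schmidt decomposition P = W A (W has orthogonal
    columns, A is upper triangular with unit diagonal), followed by the
    least-squares estimate of theta in y ~ P @ theta."""
    n, m = P.shape
    W = P.astype(float).copy()
    A = np.eye(m)
    for k in range(m):
        for j in range(k + 1, m):
            # project column j onto the already-orthogonalized column k
            A[k, j] = (W[:, k] @ W[:, j]) / (W[:, k] @ W[:, k])
            W[:, j] -= A[k, j] * W[:, k]
    # orthogonal least-squares coefficients, then back-substitution A theta = g
    g = np.array([(W[:, k] @ y) / (W[:, k] @ W[:, k]) for k in range(m)])
    theta = np.linalg.solve(A, g)
    return W, A, theta
```

Because the columns of W are mutually orthogonal, each coefficient g_k can be computed independently, which is what makes subspace-by-subspace (rule-by-rule) decomposition cheaper than fitting one large model.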

Relevance:

30.00%

Publisher:

Abstract:

A new robust neurofuzzy model construction algorithm has been introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspaces to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
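As a rough illustration of how a D-optimality criterion can score candidate rule subspaces, the sketch below (the function names and the scoring details are assumptions for illustration, not the paper's exact construction) ranks candidate matrices by the log-determinant of their information matrix, which favors well-conditioned, identifiable subspaces:

```python
import numpy as np

def d_optimality_score(W):
    """log-determinant of the information matrix W'W; larger values mean a
    better-conditioned, more identifiable design."""
    sign, logdet = np.linalg.slogdet(W.T @ W)
    return logdet if sign > 0 else -np.inf  # singular designs score -inf

def rank_rules(candidate_matrices):
    """Rank candidate rule subspaces (their weighted regression matrices)
    from most to least D-optimal."""
    scores = [d_optimality_score(W) for W in candidate_matrices]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

A subspace with nearly collinear columns has a near-singular information matrix and a very low score, so it is deselected, which is how the criterion promotes sparsity.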

Relevance:

30.00%

Publisher:

Abstract:

Recent studies showed that features extracted from brain MRIs can effectively discriminate Alzheimer's disease from Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used to solve a multi-class problem with the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machines-based wrapper improves binary classification accuracy.
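A filter-plus-wrapper pipeline of the kind described, a Random Forest-based filter followed by an SVM-based wrapper, might be sketched with scikit-learn as follows (the synthetic data and all parameter values are illustrative assumptions, not the study's protocol):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# synthetic stand-in for ROI-based MRI features (the study's data is not public)
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)

pipe = Pipeline([
    # filter: keep the 20 most important features by Random Forest importance
    ("filter", SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0),
        threshold=-np.inf, max_features=20)),
    # wrapper: recursive feature elimination driven by a linear SVM
    ("wrapper", RFE(SVC(kernel="linear"), n_features_to_select=8)),
    ("clf", SVC(kernel="linear")),
])
acc = cross_val_score(pipe, X, y, cv=5).mean()
```

Keeping the selection steps inside the cross-validated pipeline is what guards against the overfitting problem the abstract warns about: features are re-selected on every training fold rather than on the full data set.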

Relevance:

30.00%

Publisher:

Abstract:

The algorithm creates a buffer area around the cartographic features of interest in one of the images and compares it with the other image. During the comparison, the algorithm counts the matching and differing points and uses these counts to compute the statistics of the analysis. One computed statistic is correctness, which shows the user the percentage of points that were correctly extracted. Another is completeness, the percentage of points that actually belong to the feature of interest. The third value conveys the quality achieved by the extraction method, since the algorithm computes it from the previously calculated correctness and completeness. In all tests performed with this algorithm, the computed statistics could be used to quantitatively represent the quality achieved by the extraction method. Since the results obtained were promising, the developed algorithm can be used to analyze extraction methods for cartographic features of interest.
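Given counts of true-positive, false-positive, and false-negative points from the buffer comparison, the three statistics can be computed as in this sketch (these are the standard definitions used in extraction evaluation; the abstract does not spell out the paper's exact formulas):

```python
def extraction_quality(tp, fp, fn):
    """Correctness, completeness, and quality from point counts
    (standard definitions; the paper's exact formulas may differ)."""
    correctness = tp / (tp + fp)    # fraction of extracted points that are correct
    completeness = tp / (tp + fn)   # fraction of reference points that were found
    quality = tp / (tp + fp + fn)   # combines both into a single measure
    return correctness, completeness, quality
```

For example, 80 matching points, 20 wrongly extracted points, and 10 missed points give correctness 0.80, completeness about 0.89, and quality about 0.73.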

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to evaluate the physicochemical properties of avocado pulp from four different varieties (Avocado, Guatemala, Dickinson, and Butter pear) and to identify which has the greatest potential for oil extraction. Fresh avocado pulp was characterized: moisture, protein, fat, ash, carbohydrate, and energy contents were determined. The carotenoid and chlorophyll contents were determined by the organic solvent extraction method. The results showed significant differences in fruit composition when the varieties were compared. However, the striking feature of all varieties is their high lipid content; Avocado and Dickinson are the most suitable varieties for oil extraction, taking into account the moisture content and the levels of lipids in the pulp. Moreover, the variety Dickinson is the most affected by the parameters evaluated in terms of overall quality. Chlorophyll and carotenoids, fat-soluble pigments, showed a negative correlation with lipid content, which may be related to their function in the fruit. The varieties Avocado and Dickinson are an alternative for oil extraction, with great commercial potential to be exploited, thus avoiding waste and increasing farmers' income.

Relevance:

30.00%

Publisher:

Abstract:

Radiomics is the high-throughput extraction and analysis of quantitative image features. For non-small cell lung cancer (NSCLC) patients, radiomics can be applied to standard-of-care computed tomography (CT) images to improve tumor diagnosis, staging, and response assessment. The first objective of this work was to show that CT image features extracted from pre-treatment NSCLC tumors could be used to predict tumor shrinkage in response to therapy. This matters because tumor shrinkage is an important cancer treatment endpoint that is correlated with probability of disease progression and overall survival; accurate prediction of tumor shrinkage could also lead to individually customized treatment plans. To accomplish this objective, 64 stage NSCLC patients with similar treatments were all imaged using the same CT scanner and protocol. Quantitative image features were extracted, and principal component regression with simulated annealing subset selection was used to predict shrinkage. Cross validation and permutation tests were used to validate the results. The optimal model gave a strong correlation between the observed and predicted shrinkages. The second objective of this work was to identify sets of NSCLC CT image features that are reproducible, non-redundant, and informative across multiple machines. Feature sets with these qualities are needed for NSCLC radiomics models to be robust to machine variation and spurious correlation. To accomplish this objective, test-retest CT image pairs were obtained from 56 NSCLC patients imaged on three CT machines from two institutions. For each machine, quantitative image features with concordance correlation coefficient values greater than 0.90 were considered reproducible. Multi-machine reproducible feature sets were created by taking the intersection of individual machine reproducible feature sets. Redundant features were removed through hierarchical clustering.
The findings showed that image feature reproducibility and redundancy depended on both the CT machine and the CT image type (average cine 4D-CT imaging vs. end-exhale cine 4D-CT imaging vs. helical inspiratory breath-hold 3D CT). For each image type, a set of cross-machine reproducible, non-redundant, and informative image features was identified. Compared to end-exhale 4D-CT and breath-hold 3D-CT, average 4D-CT derived image features showed superior multi-machine reproducibility and are the best candidates for clinical correlation.
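The reproducibility cutoff above relies on Lin's concordance correlation coefficient, which measures agreement between test and retest feature values. A minimal numpy sketch of the standard formula:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    series; features with CCC > 0.90 across test-retest pairs are kept."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    # penalizes both poor correlation and systematic shift between scans
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike Pearson correlation, CCC drops below 1 when the retest values are shifted or rescaled relative to the test values, which is exactly the machine-dependent behavior the feature screen is meant to catch.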

Relevance:

30.00%

Publisher:

Abstract:

Motivation: In molecular biology, molecular events describe observable alterations of biomolecules, such as binding of proteins or RNA production. These events might be responsible for drug reactions or the development of certain diseases. As such, biomedical event extraction, the process of automatically detecting descriptions of molecular interactions in research articles, has recently attracted substantial research interest. Event trigger identification, detecting the words describing the event types, is a crucial and prerequisite step in the pipeline process of biomedical event extraction. Taking the event types as classes, event trigger identification can be viewed as a classification task: for each word in a sentence, a trained classifier predicts whether the word corresponds to an event type and, if so, which one, based on the context features. Therefore, a well-designed feature set with a good level of discrimination and generalization is crucial for the performance of event trigger identification. Results: In this article, we propose a novel framework for event trigger identification. In particular, we learn biomedical domain knowledge from a large text corpus built from Medline and embed it into word features using neural language modeling. The embedded features are then combined with the syntactic and semantic context features using the multiple kernel learning method. The combined feature set is used for training the event trigger classifier. Experimental results on the gold-standard corpus show that a >2.5% improvement in F-score is achieved by the proposed framework when compared with the state-of-the-art approach, demonstrating its effectiveness. The source code for the proposed framework is freely available and can be downloaded at http://cse.seu.edu.cn/people/zhoudeyu/ETI_Sourcecode.zip.
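A fixed-weight kernel combination gives a rough flavor of the multiple-kernel step: one kernel per feature view (embeddings vs. context features) feeding a single SVM. The real framework learns the kernel weights; the toy data, weights, and kernel choices below are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120
y = rng.integers(0, 2, n)                       # trigger vs. non-trigger label
E = rng.normal(size=(n, 8)) + y[:, None]        # toy "embedding" feature view
C = rng.normal(size=(n, 5)) + 0.5 * y[:, None]  # toy "context" feature view

# combine one kernel per feature view with fixed weights
# (a learned-MKL method would fit these weights instead)
K = 0.6 * rbf_kernel(E) + 0.4 * linear_kernel(C)
clf = SVC(kernel="precomputed").fit(K, y)
acc = (clf.predict(K) == y).mean()
```

Since any non-negative combination of positive semidefinite kernels is itself a valid kernel, the combined K can be passed straight to a precomputed-kernel SVM.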

Relevance:

30.00%

Publisher:

Abstract:

Automatic video segmentation plays a vital role in sports video annotation. This paper presents a fully automatic and computationally efficient algorithm for the analysis of sports videos. Various methods of automatic shot boundary detection have been proposed to perform automatic video segmentation. These investigations mainly concentrate on detecting fades and dissolves for fast processing of the entire video scene, without providing any additional feedback on object relativity within the shots. The goal of the proposed method is to identify regions that perform certain activities in a scene. The model uses low-level feature video processing algorithms to extract the shot boundaries from a video scene and to identify dominant colours within these boundaries. An object classification method is used for clustering the seed distributions of the dominant colours into homogeneous regions. Using a simple tracking method, these regions are classified as active or static. The efficiency of the proposed framework is demonstrated on a standard video benchmark with numerous types of sports events, and the experimental results show that our algorithm can be used with high accuracy for automatic annotation of active regions in sports videos.
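A minimal version of low-level shot boundary detection, one of the cues such pipelines rely on, can be sketched by thresholding the difference between intensity histograms of consecutive frames (the bin count and threshold here are illustrative choices, not the paper's values):

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Detect hard cuts by thresholding the L1 distance between normalized
    intensity histograms of consecutive frames (a common low-level cue;
    fades and dissolves need a more gradual-change detector)."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    return [i + 1 for i in range(len(hists) - 1)
            if np.abs(hists[i + 1] - hists[i]).sum() > threshold]
```

A sequence of dark frames followed by bright frames yields a single boundary at the transition, while smooth motion within a shot stays below the threshold.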

Relevance:

20.00%

Publisher:

Abstract:

To detect the presence of male DNA in vaginal samples collected from survivors of sexual violence and stored on filter paper. A pilot study was conducted to evaluate 10 vaginal samples spotted on sterile filter paper: 6 collected at random in April 2009 and 4 in October 2010. Time between sexual assault and sample collection was 4-48 hours. After drying at room temperature, the samples were placed in a sterile envelope and stored for 2-3 years until processing. DNA extraction was confirmed by polymerase chain reaction for human β-globin, and the presence of prostate-specific antigen (PSA) was quantified. The presence of the Y chromosome was detected using primers for sequences in the TSPY (Y7/Y8 and DYS14) and SRY genes. β-Globin was detected in all 10 samples, while 2 samples were positive for PSA. Half of the samples amplified the Y7/Y8 and DYS14 sequences of the TSPY gene and 30% amplified the SRY gene sequence of the Y chromosome. Four male samples and 1 female sample served as controls. Filter-paper spots stored for periods of up to 3 years proved adequate for preserving genetic material from vaginal samples collected following sexual violence.

Relevance:

20.00%

Publisher:

Abstract:

Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots, and hard exudates. BoVW allows bypassing the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for identifying each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. To assess the accuracy of detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
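The mid-level coding step can be illustrated in reduced form: the sketch below encodes a set of local descriptors against a visual codebook with hard assignment and max pooling (the paper uses semi-soft coding; this is a simplified stand-in with made-up toy data):

```python
import numpy as np

def bovw_encode(descriptors, codebook):
    """Encode local descriptors against a visual codebook: each descriptor
    activates only its nearest visual word, and the image signature is the
    max activation per word over all descriptors (max pooling)."""
    # squared distance of every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2)  # turn distances into similarities in (0, 1]
    # hard assignment: zero out all but each descriptor's nearest word
    code = np.where(d2 == d2.min(axis=1, keepdims=True), sim, 0.0)
    return code.max(axis=0)  # max pooling over descriptors
```

The resulting fixed-length vector (one entry per visual word) is what a maximum-margin classifier such as an SVM is trained on, regardless of how many SURF descriptors each retinal image produced.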

Relevance:

20.00%

Publisher:

Abstract:

In the current study, a new approach was developed to correct the effect that moisture reduction after virgin olive oil (VOO) filtration exerts on the apparent increase of the secoiridoid content, by using an internal standard during extraction. Firstly, VOOs of two main Spanish varieties (Picual and Hojiblanca) were subjected to industrial filtration. Afterwards, the moisture content was determined in unfiltered and filtered VOOs, and liquid-liquid extraction of phenolic compounds was performed using different internal standards. The resulting extracts were analyzed by HPLC-ESI-TOF/MS in order to gain maximum information concerning the phenolic profiles of the samples under study. The reducing effect of filtration on the moisture content, phenolic alcohols, and flavones was confirmed at the industrial scale. Oleuropein was chosen as the internal standard and, for the first time, the apparent increase of secoiridoids in filtered VOO was corrected, using a correction coefficient (Cc) calculated from the variation of the internal standard area in filtered and unfiltered VOO during extraction. This approach gave the real concentration of secoiridoids in filtered VOO and clarified the effect of the filtration step on the phenolic fraction. This finding is of great importance for future studies that seek to quantify phenolic compounds in VOOs.
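One plausible form of such a correction, scaling the apparent concentration by the ratio of internal-standard areas in unfiltered versus filtered oil, is sketched below. The abstract does not give the paper's exact Cc definition, so this formula and the function name are assumptions for illustration only:

```python
def corrected_concentration(apparent_conc_filtered,
                            is_area_unfiltered, is_area_filtered):
    """Hypothetical correction: scale the apparent (moisture-inflated)
    secoiridoid concentration in filtered VOO by the ratio of
    internal-standard areas. The paper's actual Cc may be defined differently."""
    cc = is_area_unfiltered / is_area_filtered  # correction coefficient
    return cc * apparent_conc_filtered
```

For instance, if the internal-standard area rises from 900 to 1000 units after filtration while the apparent concentration reads 110, the corrected value comes out at 99, removing the moisture-driven inflation.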

Relevance:

20.00%

Publisher:

Abstract:

Originally from Asia, Dovyalis hebecarpa is a dark purple/red exotic berry now also produced in Brazil. However, no reports were found in the literature on phenolic extraction or characterisation of this berry. In this study we evaluated the optimisation of the extraction of anthocyanins and total phenolics from D. hebecarpa berries, aiming at the development of a simple and mild analytical technique. Multivariate analysis was used to optimise the extraction variables (ethanol:water:acetone solvent proportions, times, and acid concentrations) at different levels. Acetone/water (20/80 v/v) gave the highest anthocyanin extraction yield, but pure water and different proportions of acetone/water or acetone/ethanol/water (with >50% water) were also effective. Neither acid concentration nor time had a significant effect on extraction efficiency, allowing the recommended parameters to be fixed at the lowest values tested (0.35% formic acid v/v, and 17.6 min). Under optimised conditions, extraction efficiencies were increased by 31.5% and 11% for anthocyanins and total phenolics, respectively, compared with traditional methods that use more solvent and time. Thus, the optimised methodology increased yields while being less hazardous and less time-consuming than traditional methods. Finally, freeze-dried D. hebecarpa showed a high content of the target phytochemicals (319 mg/100 g and 1,421 mg/100 g of total anthocyanins and total phenolics, respectively).

Relevance:

20.00%

Publisher:

Abstract:

Extraction processes are widely used in the chemical, biotechnological, and pharmaceutical industries for the recovery of bioactive compounds from medicinal plants. To replace conventional extraction techniques, new techniques such as high-pressure extraction processes that use environmentally friendly solvents have been developed. However, these techniques are sometimes associated with low extraction rates. Ultrasound can be used effectively to improve the extraction rate by increasing mass transfer and possibly rupturing the cell wall through the formation of microcavities, leading to higher product yields with reduced processing time and solvent consumption. This review presents a brief survey of the mechanism of ultrasound-assisted extraction and of the aspects affecting it, focusing on the use of ultrasound irradiation for the intensification of high-pressure extraction processes.

Relevance:

20.00%

Publisher:

Abstract:

Purified genomic DNA can be difficult to obtain from some plant species because of the presence of impurities such as polysaccharides, which are often co-extracted with DNA. In this study, we developed a fast, simple, and low-cost protocol for extracting DNA from plants containing high levels of secondary metabolites. This protocol does not require the use of volatile toxic reagents such as mercaptoethanol, chloroform, or phenol and allows the extraction of high-quality DNA from wild and cultivated tropical species.

Relevance:

20.00%

Publisher:

Abstract:

Extracts from malagueta pepper (Capsicum frutescens L.) were obtained using supercritical fluid extraction (SFE) assisted by ultrasound, with carbon dioxide as solvent at 15 MPa and 40 °C. The SFE global yield increased by up to 77% when ultrasound waves were applied, and the best ultrasound-assisted extraction condition was an ultrasound power of 360 W applied for 60 min. Four capsaicinoids were identified in the extracts and quantified by high-performance liquid chromatography. The use of ultrasonic waves did not significantly influence the capsaicinoid profiles or the phenolic content of the extracts. However, ultrasound enhanced the SFE rate. A model based on the broken and intact cell concept was adequate to represent the extraction kinetics and to estimate the mass transfer coefficients, which increased with ultrasound. Images obtained by field emission scanning electron microscopy showed that the action of ultrasonic waves did not cause cracks on the cell wall surface. On the other hand, ultrasound promoted disturbances in the vegetable matrix, leading to the release of extractable material onto the solid surface. The effects of ultrasound were more significant in SFE from larger solid particles.
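A strongly simplified broken-and-intact-cell extraction curve can be sketched as two first-order stages: easily accessible solute from broken cells leaves quickly, while solute in intact cells follows a slower, diffusion-limited rate. This is a sketch of the concept only, with made-up rate constants; the full Sovová-type model used in such studies is considerably more elaborate:

```python
import numpy as np

def extraction_curve(t, x0, frac_broken, k_fast, k_slow):
    """Cumulative extracted fraction over time t: solute in broken cells
    (fraction frac_broken of total x0) extracts at rate k_fast, solute in
    intact cells at the diffusion-limited rate k_slow."""
    broken = frac_broken * x0 * (1 - np.exp(-k_fast * t))
    intact = (1 - frac_broken) * x0 * (1 - np.exp(-k_slow * t))
    return broken + intact
```

Ultrasound-enhanced mass transfer would appear in this picture as larger rate constants (and possibly a larger broken-cell fraction), steepening the curve without changing its asymptote x0.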