92 results for Feature-extraction


Relevance: 20.00%

Abstract:

Supercritical carbon dioxide (SC-CO₂) extraction was employed to extract carotenoids from the freeze-dried pulp of pitanga fruits (Eugenia uniflora L.), an exotic fruit rich in carotenoids and still little explored commercially. The SC-CO₂ extraction was carried out at two temperatures, 40 and 60 °C, and seven pressures: 100, 150, 200, 250, 300, 350 and 400 bar. The carotenoids were determined by high-performance liquid chromatography connected to photodiode array and mass spectrometry detectors. Lycopene, rubixanthin and β-cryptoxanthin were the main carotenoids present in the freeze-dried pitanga pulp, whereas the β-cryptoxanthin concentration was negligible in the SC-CO₂ extracts under all the investigated conditions. The maximum recovery of carotenoids was obtained at 60 °C and 250 bar, extracting 55% of the total carotenoid content, 74% of the rubixanthin and 78% of the lycopene from the pulp. Under these conditions, the total carotenoid concentration in the extract was 5474 μg/g, represented by 66% lycopene and 32% rubixanthin. The experimental conditions produced different SC-CO₂ extracts with respect to extraction yield and the concentration of the different carotenoids, indicating that supercritical carbon dioxide was selective in the extraction of the pitanga carotenoids as a function of temperature and pressure. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

The main goal of this work was to evaluate thermodynamic parameters of the soybean oil extraction process using ethanol as solvent. The experimental treatments were as follows: aqueous solvents with water contents varying from 0 to 13% (mass basis) and extraction temperatures varying from 50 to 100 °C. The distribution coefficients of oil at equilibrium were used to calculate enthalpy, entropy and free energy changes. The results indicate that the oil extraction process with ethanol is feasible and spontaneous, particularly at higher temperatures. The influences of the water level in the solvent and of temperature were also analysed using response surface methodology (RSM); the extraction yield was strongly affected by both independent variables. A joint analysis of the thermodynamic data and the RSM results indicates the optimal level of solvent hydration and temperature at which to perform the extraction process.
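The thermodynamic quantities mentioned in this abstract follow from the standard van 't Hoff relations (ΔG = −RT ln K, with ΔH obtained from the slope of ln K versus 1/T). A minimal Python sketch, using purely illustrative distribution coefficients rather than the paper's data:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

# Hypothetical distribution coefficients K of oil between the two phases at
# two temperatures (illustrative values only, not taken from the paper).
data = {323.15: 1.8, 373.15: 3.1}  # T in K -> K (dimensionless)

(T1, K1), (T2, K2) = sorted(data.items())

# van 't Hoff: ln K = -dH/(R*T) + dS/R, so dH comes from the slope of
# ln K against 1/T; dG and dS then follow at a given temperature.
dH = -R * (math.log(K2) - math.log(K1)) / (1 / T2 - 1 / T1)  # J/mol
dG1 = -R * T1 * math.log(K1)                                  # J/mol at T1
dS = (dH - dG1) / T1                                          # J/(mol*K)

print(f"dH = {dH/1000:.2f} kJ/mol, dG(T1) = {dG1/1000:.2f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With these illustrative values ΔG comes out negative (spontaneous) and ΔH positive, consistent with the abstract's conclusion that the extraction becomes more favourable at higher temperature.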

Relevance: 20.00%

Abstract:

Due to idiosyncrasies in their syntax, semantics or frequency, Multiword Expressions (MWEs) have received special attention from the NLP community, as the methods and techniques developed for the treatment of simplex words are not necessarily suitable for them. This is certainly the case for the automatic acquisition of MWEs from corpora, and a lot of effort has been directed to the task of automatically identifying them, with considerable success. In this paper, we propose an approach for the identification of MWEs in a multilingual context, as a by-product of a word alignment process, which not only identifies possible MWE candidates but also associates some multiword expressions with semantics. The results obtained indicate the feasibility and the low cost, in terms of tools and resources, of this approach, which could, for example, facilitate and speed up lexicographic work.

Relevance: 20.00%

Abstract:

This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, the proposed filter may be preferred over its counterparts, making it eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
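The idea behind cluster-based filters like this one can be sketched as: group highly correlated features and keep one representative per group. The greedy threshold clustering below only illustrates that principle; the paper's algorithm, including its automatic estimation of the number of clusters, differs, and the `threshold` parameter here is an assumption:

```python
import numpy as np

def cluster_filter(X, y=None, threshold=0.8):
    """Toy cluster-based feature filter: group features whose absolute
    correlation exceeds `threshold` and keep one representative per group.
    If labels y are given, keep the member most correlated with the class
    (mirroring the supervised variant mentioned in the abstract)."""
    n_features = X.shape[1]
    corr = np.abs(np.corrcoef(X, rowvar=False))
    clusters = []  # each cluster is a list of feature indices
    for j in range(n_features):
        for c in clusters:
            if corr[j, c[0]] >= threshold:  # join a cluster of similar features
                c.append(j)
                break
        else:
            clusters.append([j])            # start a new cluster
    selected = []
    for c in clusters:
        if y is None:
            selected.append(c[0])
        else:
            fc = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in c]
            selected.append(c[int(np.argmax(fc))])
    return selected

rng = np.random.default_rng(0)
f0 = rng.normal(size=200)
# feature 1 is a near-duplicate of feature 0; feature 2 is independent
X = np.column_stack([f0, f0 + 0.01 * rng.normal(size=200), rng.normal(size=200)])
print(cluster_filter(X))  # -> [0, 2]
```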

Relevance: 20.00%

Abstract:

Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used or used in a specialised manner, due to the limitations of the feature-based modelling techniques employed. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can use diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems can be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system to construct a set of features using semantic, syntactic and lexical information. This feature set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. In our investigation we examine one-shot and incremental approaches to feature-set construction, applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks, while the bilingual WSD task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that simply use shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way to make substantial progress in the field of WSD.
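The incremental feature-set construction evaluated here can be caricatured as a greedy loop that keeps a candidate feature only when it improves the accuracy of the downstream classifier. The sketch below substitutes a leave-one-out 1-NN scorer for the paper's SVM and uses synthetic data; every name and value is illustrative:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out 1-NN accuracy; a lightweight stand-in for the SVM
    the paper trains on top of the ILP-constructed features."""
    n = len(y)
    hits = 0
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the held-out sample
        hits += y[int(np.argmin(d))] == y[i]
    return hits / n

def incremental_selection(X, y):
    """Greedy incremental feature-set construction (a sketch of the idea,
    not the paper's ILP procedure): add a candidate feature only if it
    improves the accuracy of the downstream classifier."""
    chosen, best = [], 0.0
    for j in range(X.shape[1]):
        acc = loo_accuracy(X[:, chosen + [j]], y)
        if acc > best:
            chosen, best = chosen + [j], acc
    return chosen, best

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)                       # two "senses"
informative = y + 0.3 * rng.normal(size=100)    # feature that separates them
noise = rng.normal(size=(100, 3))               # irrelevant features
X = np.column_stack([informative, noise])
chosen, best = incremental_selection(X, y)
print(chosen, round(best, 2))
```

On this synthetic task the informative feature is picked first and the noise features are typically rejected, mirroring the abstract's finding that incremental construction identifies smaller feature sets.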

Relevance: 20.00%

Abstract:

We introduce a flexible technique for interactive exploration of vector field data through classification derived from user-specified feature templates. Our method is founded on the observation that, while similar features within the vector field may be spatially disparate, they share similar neighborhood characteristics. Users generate feature-based visualizations by interactively highlighting well-accepted and domain-specific representative feature points. Feature exploration begins with the computation of attributes that describe the neighborhood of each sample within the input vector field. Compiling these attributes forms a representation of the vector field samples in an attribute space. We project the attribute points onto the canonical 2D plane to enable interactive exploration of the vector field using a painting interface. The projection encodes the similarities between vector field points in the distances computed between their associated attribute points. The proposed method runs at interactive rates for an enhanced user experience and is completely flexible, as showcased by the simultaneous identification of diverse feature types.
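The pipeline described (per-sample neighborhood attributes, then projection of the attribute points onto a 2D plane) can be sketched as below. The particular attribute set and the use of PCA as the projection are assumptions for illustration; the paper's choices may differ:

```python
import numpy as np

def neighborhood_attributes(field):
    """Per-sample attributes describing the 3x3 neighborhood of each interior
    sample of a 2D vector field (H x W x 2). The attribute set here (mean
    magnitude, magnitude variance, mean direction components) is illustrative."""
    mag = np.linalg.norm(field, axis=2)
    ang = np.arctan2(field[..., 1], field[..., 0])
    attrs = []
    h, w = mag.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            m = mag[i - 1:i + 2, j - 1:j + 2]
            a = ang[i - 1:i + 2, j - 1:j + 2]
            attrs.append([m.mean(), m.var(), np.cos(a).mean(), np.sin(a).mean()])
    return np.array(attrs)

def project_2d(attrs):
    """PCA projection of the attribute points onto a 2D plane; a minimal
    stand-in for whatever projection the paper actually uses."""
    centered = attrs - attrs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Synthetic field: uniform horizontal flow with a stronger band in the middle.
y, x = np.mgrid[0:16, 0:16]
field = np.dstack([np.ones_like(x, float) + (np.abs(y - 8) < 3),
                   np.zeros_like(x, float)])
pts = project_2d(neighborhood_attributes(field))
print(pts.shape)  # one 2D point per interior sample: (196, 2)
```

Points whose neighborhoods are similar (e.g. all samples inside the strong band) land close together in the 2D plane, which is what makes a painting interface over the projection usable for selecting features.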

Relevance: 20.00%

Abstract:

Visual representations of isosurfaces are ubiquitous in the scientific and engineering literature. In this paper, we present techniques to assess the behavior of isosurface extraction codes. Where applicable, these techniques allow us to distinguish whether anomalies in isosurface features can be attributed to the underlying physical process or to artifacts from the extraction process. Such scientific scrutiny is at the heart of verifiable visualization - subjecting visualization algorithms to the same verification process that is used in other components of the scientific pipeline. More concretely, we derive formulas for the expected order of accuracy (or convergence rate) of several isosurface features, and compare them to experimentally observed results in the selected codes. This technique is practical: in two cases, it exposed actual problems in implementations. We provide the reader with the range of responses they can expect to encounter with isosurface techniques, both under "normal operating conditions" and also under adverse conditions. Armed with this information - the results of the verification process - practitioners can judiciously select the isosurface extraction technique appropriate for their problem of interest, and have confidence in its behavior.
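The comparison of expected against observed order of accuracy reduces to a simple computation once errors in an isosurface feature are measured on two grid resolutions. A sketch with illustrative error values (not results from the paper):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed order of accuracy (convergence rate) from the errors of an
    isosurface feature measured on two grids whose spacing differs by
    `refinement`; verification compares this against the expected order."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Illustrative errors for a second-order-accurate feature: halving the grid
# spacing should divide the error by ~4.
print(observed_order(1.0e-2, 2.5e-3))  # -> 2.0
```

A measured rate well below the derived expected order (here, 2.0) is the kind of anomaly that, per the abstract, exposed actual implementation problems in two of the tested codes.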

Relevance: 20.00%

Abstract:

This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform (SIFT) algorithm, applied to the Simultaneous Localization And Mapping (SLAM) problem. The work also proposes specific hardware optimizations considered fundamental to embedding such a robotic control system on a chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-orientated optimizations on performance, area and accuracy.

Relevance: 20.00%

Abstract:

This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the others by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space using new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even shorter computational time. (C) 2009 Elsevier Ltd. All rights reserved.
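The U-shaped-chain property can be illustrated with a toy cost (distance to an "ideal" subset in the Boolean lattice): along any chain of nested subsets, once the cost starts increasing it cannot improve again, so the rest of the chain can be pruned. The greedy walk below only illustrates that pruning idea and is not the paper's branch-and-bound algorithm; the ideal subset is a made-up example:

```python
def u_cost(subset, ideal=frozenset({0, 2})):
    """Toy U-shaped cost over the Boolean lattice: size of the symmetric
    difference to an 'ideal' feature subset. Along any chain of nested
    subsets this cost first decreases, then increases."""
    return len(set(subset) ^ ideal)

def chain_search(n_features, cost):
    """Walk one greedy chain of the lattice, always adding the feature that
    lowers the cost most, and stop as soon as no extension improves the
    cost -- valid pruning because chain costs are U-shaped."""
    best, best_cost = frozenset(), cost(frozenset())
    current = set()
    for _ in range(n_features):
        candidates = [(cost(current | {f}), f)
                      for f in range(n_features) if f not in current]
        c, f = min(candidates)
        if c >= cost(current):   # U-shape: this chain cannot improve further
            break
        current.add(f)
        if c < best_cost:
            best, best_cost = frozenset(current), c
    return best, best_cost

print(chain_search(4, u_cost))  # -> (frozenset({0, 2}), 0)
```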

Relevance: 20.00%

Abstract:

This work describes a novel methodology for automatic contour extraction from 2D images of 3D neurons (e.g. camera lucida images and other types of 2D microscopy). Most contour-based shape analysis methods cannot be used to characterize such cells because of overlaps between neuronal processes. The proposed framework is specifically aimed at the problem of contour following even in the presence of multiple overlaps. First, the input image is preprocessed in order to obtain an 8-connected skeleton with one-pixel-wide branches, as well as a set of critical regions (i.e., bifurcations and crossings). Next, for each subtree, the tracking stage iteratively labels all valid pixels of branches up to a critical region, where it determines the suitable direction in which to proceed. Finally, the labeled skeleton segments are followed in order to yield the parametric contour of the neuronal shape under analysis. The reported system was successfully tested on several images; the results for a set of three neuron images are presented here, each pertaining to a different class (alpha, delta and epsilon ganglion cells) and containing a total of 34 crossings. The algorithm successfully resolved all of these overlaps, and the method has also been found to be robust even for images with close parallel segments. The proposed method may be implemented in an efficient manner, and its introduction should pave the way for more systematic application of contour-based shape analysis methods in neuronal morphology. (C) 2008 Elsevier B.V. All rights reserved.
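The branch-tracking stage, following one-pixel-wide 8-connected skeleton branches until a critical region (bifurcation or crossing) is reached, can be sketched as follows. This is a toy illustration on a synthetic skeleton, not the paper's full system:

```python
import numpy as np

# 8-connectivity offsets used to follow one-pixel-wide skeleton branches.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def follow_branch(skel, start):
    """Follow a one-pixel-wide branch of a binary skeleton from a tip until
    a critical region (pixel with more than one unvisited neighbor) or
    another tip is reached, returning the visited pixel path."""
    path, prev, cur = [start], None, start
    while True:
        nxt = [(cur[0] + dy, cur[1] + dx) for dy, dx in NEIGHBORS
               if skel[cur[0] + dy, cur[1] + dx]
               and (cur[0] + dy, cur[1] + dx) != prev]
        if len(nxt) != 1:        # 0 = a tip; >1 = a critical region: stop
            return path
        prev, cur = cur, nxt[0]
        path.append(cur)

skel = np.zeros((7, 7), bool)
for j in range(1, 6):
    skel[3, j] = True            # a single horizontal branch
print(follow_branch(skel, (3, 1)))  # pixels from tip (3,1) to tip (3,5)
```

In the full method a traversal like this would run per subtree, with a dedicated decision rule at each critical region to pick the direction in which to continue.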

Relevance: 20.00%

Abstract:

Coal mining and the incineration of solid residues of health services (SRHS) generate several contaminants that are released into the environment, such as heavy metals and dioxins. These xenobiotics can lead to the overgeneration of oxidative stress in organisms and cause different kinds of pathologies, including cancer. In the present study, the concentrations of heavy metals (lead, copper, iron, manganese and zinc) were measured in the urine, and several enzymatic and non-enzymatic biomarkers of oxidative stress were measured in the blood (lipoperoxidation = TBARS, protein carbonyls = PC, protein thiols = PT, alpha-tocopherol = AT, reduced glutathione = GSH, and the activities of glutathione S-transferase = GST, glutathione reductase = GR, glutathione peroxidase = GPx, catalase = CAT and superoxide dismutase = SOD), in six different groups (n = 20 each) of subjects exposed to airborne contamination related to coal mining and SRHS incineration. Measurements were taken after supplementation with vitamin E (800 mg/day) and vitamin C (500 mg/day) for 6 months and compared with the situation before the antioxidant intervention (Avila et al., Ecotoxicology 18:1150-1157, 2009; Possamai et al., Ecotoxicology 18:1158-1164, 2009). Except for the decreased manganese contents, heavy metal concentrations were elevated in all groups exposed to either source of airborne contamination when compared to controls. TBARS and PC concentrations, which were elevated before the antioxidant intervention, decreased after the antioxidant supplementation. Similarly, the contents of PT, AT and GSH, which were decreased before the antioxidant intervention, reached values near those found in controls; GPx activity was reestablished in underground miners, and SOD, CAT and GST activities were reestablished in all groups.
The results show that the oxidative stress condition detected prior to the antioxidant supplementation, in subjects both directly and indirectly exposed to the airborne contamination from coal dust and SRHS incineration, was attenuated after the antioxidant intervention.

Relevance: 20.00%

Abstract:

In this work, cassava bagasse, a by-product of cassava starch industrialization, was investigated as a new raw material from which to extract cellulose whiskers. This by-product basically consists of cellulose fibers (17.5 wt%) and residual starch (82 wt%). The residue therefore contains both natural fibers and a considerable quantity of starch, a composition that suggests the possibility of using cassava bagasse to prepare both starch nanocrystals and cellulose whiskers. Accordingly, the preparation of cellulose whiskers was investigated employing sulfuric acid hydrolysis conditions found in the literature. The ensuing materials were characterized by transmission electron microscopy (TEM) and X-ray diffraction. The results showed that high-aspect-ratio cellulose whiskers were successfully obtained. The reinforcing capability of the cellulose whiskers extracted from cassava bagasse was investigated using natural rubber as the matrix, and enhanced mechanical properties were observed in dynamic mechanical analysis. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

In this article, a novel polydimethylsiloxane/activated carbon (PDMS-ACB) material is proposed as a new polymeric phase for stir bar sorptive extraction (SBSE). The PDMS-ACB stir bar, assembled using a simple Teflon®/glass capillary mold, demonstrated remarkable stability and resistance to organic solvents over more than 150 extractions. The SBSE bar has a diameter of 2.36 mm and a length of 2.2 cm, and is prepared to contain 92 μL of polymer coating. This new PDMS-ACB bar was evaluated for its ability to determine pesticides in sugarcane juice samples by performing liquid desorption (LD) in 200 μL of ethyl acetate and analyzing the solvent by gas chromatography coupled with mass spectrometry (GC-MS). A fractional factorial design was used to evaluate the main parameters involved in the extraction procedure; a central composite design with a star configuration was then used to optimize the significant extraction parameters. The method demonstrated limits of quantification (LOQ) of 0.5-40 μg/L, depending on the analyte; recoveries varied from 0.18 to 49.50%, and the intraday precision ranged from 0.072 to 8.40%. The method was applied to the analysis of real sugarcane juice samples commercially available in local markets.

Relevance: 20.00%

Abstract:

This article presents a method employing stir bar sorptive extraction (SBSE) with in situ derivatization, in combination with either thermal or liquid desorption coupled on-line to gas chromatography-mass spectrometry, for the analysis of fluoxetine in plasma samples. Ethyl chloroformate was employed as the derivatizing agent, producing symmetrical peaks. Parameters such as solvent polarity, analyte desorption time and extraction time were evaluated. During the validation process, the developed method showed specificity, linearity (R² > 0.99), precision (R.S.D. < 15%), and limits of quantification (LOQ) of 30 and 1.37 pg mL⁻¹ when liquid and thermal desorption were employed, respectively. This simple and highly sensitive method proved adequate for the measurement of fluoxetine at both typical and trace concentration levels. (c) 2008 Elsevier B.V. All rights reserved.