64 results for Feature extraction and classification


Relevance:

100.00%

Publisher:

Abstract:

The usefulness of motor subtypes of delirium is unclear due to inconsistency in subtyping methods and a lack of validation with objective measures of activity. The activity of 40 patients was measured over 24 h with a discrete accelerometer-based activity monitor. The continuous wavelet transform (CWT), with various mother wavelets, was applied to accelerometry data from three randomly selected patients with DSM-IV delirium who were readily divided into hyperactive, hypoactive, and mixed motor subtypes. A classification tree used the periods of overall movement, as measured by the discrete accelerometer-based monitor, as the determining factors for classifying these delirious patients. The data used to create the classification tree were the minimum, maximum, standard deviation, and number of coefficient values generated over a range of scales by the CWT. The classification tree was subsequently used to define the remaining motoric subtypes. The use of a classification system shows how delirium subtypes can be categorized in relation to overall motoric behavior, and the system was also used to successfully define the motoric subtypes of other patients. Motor subtypes of delirium defined by observed ward behavior differ in electronically measured activity levels.
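A minimal sketch of this kind of pipeline, assuming a Morlet mother wavelet, scales 1-64, and the four summary statistics named above; the function names, libraries and parameters are illustrative, not the authors' code:

```python
# Hedged sketch: summary statistics of a continuous wavelet transform of
# accelerometry data, fed to a classification tree. All specifics are assumptions.
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def cwt_features(signal, scales=np.arange(1, 65), wavelet="morl"):
    """Min, max, std and count of CWT coefficients over a range of scales."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return np.array([coeffs.min(), coeffs.max(), coeffs.std(), coeffs.size])

def fit_subtype_tree(activity_signals, subtype_labels):
    """activity_signals: 1-D accelerometry recordings; labels: 0 = hypoactive,
    1 = hyperactive, 2 = mixed (an assumed coding)."""
    X = np.vstack([cwt_features(s) for s in activity_signals])
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    return tree.fit(X, subtype_labels)
```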

Relevance:

100.00%

Publisher:

Abstract:

A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization error when choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of LOO criteria, namely the mean of the squared LOO errors for regression and the LOO misclassification rate for classification, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, derived in this contribution, without actually splitting the estimation data set, which reduces the computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
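A minimal sketch of the regression case, assuming an ordinary least-squares model and the classical hat-matrix identity for LOO errors; the greedy elimination loop is a simplified stand-in for the authors' orthogonalized recursion, and all names are illustrative:

```python
# Hedged sketch: analytic LOO errors without refitting, via e_loo_i = e_i / (1 - h_ii),
# used as the criterion in a simple backward-elimination loop.
import numpy as np

def loo_mse(X, y):
    """Mean squared LOO error of ordinary least squares on design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
    leverages = np.diag(H)
    return np.mean((residuals / (1.0 - leverages)) ** 2)

def backward_eliminate(X, y):
    """Greedily drop regressors while the LOO criterion keeps improving."""
    cols = list(range(X.shape[1]))
    best = loo_mse(X[:, cols], y)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for j in list(cols):
            trial = [c for c in cols if c != j]
            score = loo_mse(X[:, trial], y)
            if score < best:
                best, cols, improved = score, trial, True
                break
    return cols, best
```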

Relevance:

100.00%

Publisher:

Abstract:

Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
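A toy sketch of the EM idea, assuming a translation-only pose, a Gaussian responsibility model over candidate features, and made-up variable names; it illustrates fitting to marginalized (Bayes least-squares) estimates rather than the strongest feature, not the paper's MLR formulation:

```python
# Hedged toy sketch of an EM-style contour fit with a translation-only pose.
import numpy as np

def em_translation_fit(model_points, candidates, sigma=2.0, n_iter=10):
    """
    model_points : (N, 2) contour points in model coordinates
    candidates   : list of (M_i, 2) arrays of detected feature positions near point i
    Returns the estimated 2-D translation of the contour.
    """
    t = np.zeros(2)
    for _ in range(n_iter):
        targets = []
        for p, cand in zip(model_points, candidates):
            d2 = np.sum((cand - (p + t)) ** 2, axis=1)
            w = np.exp(-0.5 * d2 / sigma**2)            # responsibilities (E-step)
            w = w / (w.sum() + 1e-12)
            targets.append(w @ cand)                    # marginalized location estimate
        targets = np.array(targets)
        t = np.mean(targets - model_points, axis=0)     # least-squares pose (M-step)
    return t
```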

Relevance:

100.00%

Publisher:

Abstract:

This work presents a new method for activity extraction and reporting from video based on the aggregation of fuzzy relations. Trajectory clustering is first employed, mainly to discover the points of entry and exit of mobiles appearing in the scene. In a second step, proximity relations between the resulting clusters of detected mobiles and contextual elements of the scene are modeled with fuzzy relations, which can then be aggregated using standard soft-computing algebra. A clustering algorithm based on the transitive closure of the fuzzy relations builds the structure of the scene and characterises the different ongoing activities in it. Discovered activity zones can be reported as activity maps at different granularities through analysis of the transitive closure matrix. Taking advantage of the properties of the fuzzy relations, activity zones and related activities can be labeled in a more human-like language. We present results obtained on real videos of apron monitoring at the Toulouse airport in France.
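A minimal sketch of the closure-based clustering step, assuming a max-min composition, a reflexive and symmetric fuzzy similarity matrix, and an illustrative alpha-cut threshold; none of these specifics come from the paper:

```python
# Hedged sketch: clustering via the max-min transitive closure of a fuzzy relation.
import numpy as np

def maxmin_compose(R, S):
    """Max-min composition of two fuzzy relations."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def transitive_closure(R):
    """Iterate T <- max(T, T o T) until it stops changing."""
    T = R.copy()
    while True:
        T_next = np.maximum(T, maxmin_compose(T, T))
        if np.allclose(T_next, T):
            return T_next
        T = T_next

def alpha_cut_clusters(T, alpha):
    """Classes of the alpha-cut of the closure, read as activity zones."""
    n = T.shape[0]
    labels, current = -np.ones(n, dtype=int), 0
    for i in range(n):
        if labels[i] < 0:
            members = np.where(T[i] >= alpha)[0]
            labels[members] = current
            current += 1
    return labels
```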

Relevance:

100.00%

Publisher:

Abstract:

Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
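A small sketch of the defining property of an invariant pair, checked numerically for a quadratic matrix polynomial extracted from a standard companion linearization; the matrices are random examples, not data from the paper's experiments:

```python
# Hedged sketch: an invariant pair (X, S) of P(lambda) = A0 + lambda*A1 + lambda^2*A2
# satisfies A0 @ X + A1 @ X @ S + A2 @ X @ S^2 = 0. We build one from two eigenpairs.
import numpy as np

def poly_residual(coeffs, X, S):
    """Frobenius norm of sum_i A_i @ X @ S**i for coeffs = [A0, A1, ..., Ad]."""
    R = np.zeros(X.shape, dtype=complex)
    Spow = np.eye(X.shape[1], dtype=complex)
    for A in coeffs:
        R = R + A @ X @ Spow
        Spow = Spow @ S
    return np.linalg.norm(R)

rng = np.random.default_rng(0)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization: A @ v = lambda * B @ v with v = [x; lambda*x].
Z, I = np.zeros((n, n)), np.eye(n)
B = np.block([[I, Z], [Z, A2]])
A = np.block([[Z, I], [-A0, -A1]])
evals, evecs = np.linalg.eig(np.linalg.solve(B, A))

# Extract an invariant pair from two eigenpairs: X from the top block, S diagonal.
X = evecs[:n, :2]
S = np.diag(evals[:2])
print(poly_residual([A0, A1, A2], X, S))   # should be close to machine precision
```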

Relevance:

100.00%

Publisher:

Abstract:

The total phenols, apigenin 7-glucoside, turbidity and colour of extracts from dried chamomile flowers were studied with a view to developing chamomile extracts with potential anti-inflammatory properties for incorporation into beverages. The extraction of all constituents followed pseudo first-order kinetics. In general, the rate constant (k) increased as the temperature increased from 57 to 100 °C, while the turbidity only increased significantly between 90 and 100 °C. Therefore, aqueous chamomile extracts had maximum total phenol concentration and minimum turbidity when extracted at 90 °C for 20 min. The effect of drying conditions on chamomile extracted under these conditions was then determined. A significant reduction in phenol concentration, from 19.7 ± 0.5 mg/g GAE in fresh chamomile to 13 ± 1 mg/g GAE, was found only in plant material oven-dried at 80 °C (p ≤ 0.05). The biggest colour change was between fresh chamomile and material oven-dried at 80 °C, followed by air-dried samples. There was no significant difference in colour between freeze-dried material and material oven-dried at 40 °C.
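A brief sketch of fitting the pseudo first-order model C(t) = C_eq * (1 - exp(-k*t)) to extraction data; the time points and concentrations below are placeholders, not the paper's measurements:

```python
# Hedged sketch: least-squares fit of a pseudo first-order extraction curve.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_eq, k):
    return c_eq * (1.0 - np.exp(-k * t))

t = np.array([0, 2, 5, 10, 15, 20, 30], dtype=float)    # minutes (placeholder)
c = np.array([0.0, 5.1, 9.8, 14.2, 16.5, 17.8, 18.9])   # mg/g GAE (placeholder)

(c_eq, k), _ = curve_fit(first_order, t, c, p0=[c.max(), 0.1])
print(f"C_eq = {c_eq:.2f} mg/g GAE, k = {k:.3f} per min")
```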

Relevance:

100.00%

Publisher:

Abstract:

In The Conduct of Inquiry in International Relations, Patrick Jackson situates methodologies in International Relations in relation to their underlying philosophical assumptions. One of his aims is to map International Relations debates in a way that ‘capture[s] current controversies’ (p. 40). This ambition is overstated: whilst Jackson’s typology is useful as a clarificatory tool, (re)classifying existing scholarship in International Relations is more problematic. One problem with Jackson’s approach is that he tends to run together the philosophical assumptions which decisively differentiate his methodologies (by stipulating a distinctive warrant for knowledge claims) and the explanatory strategies that are employed to generate such knowledge claims, suggesting that the latter are entailed by the former. In fact, the explanatory strategies which Jackson associates with each methodology reflect conventional practice in International Relations just as much as they reflect philosophical assumptions. This makes it more difficult to identify each methodology at work than Jackson implies. I illustrate this point through a critical analysis of Jackson’s controversial reclassification of Waltz as an analyticist, showing that whilst Jackson’s typology helps to expose inconsistencies in Waltz’s approach, it does not fully support the proposed reclassification. The conventional aspect of methodologies in International Relations also raises questions about the limits of Jackson’s ‘engaged pluralism’.

Relevance:

100.00%

Publisher:

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionality available in the KNIME workbench.

Relevance:

100.00%

Publisher:

Abstract:

Immunodiagnostic microneedles provide a novel way to extract protein biomarkers from the skin in a minimally invasive manner for analysis in vitro. The technology could overcome challenges in biomarker analysis specifically in solid tissue, which currently often involves invasive biopsies. This study describes the development of a multiplex immunodiagnostic device incorporating mechanisms to detect multiple antigens simultaneously, as well as internal assay controls for result validation. A novel detection method is also proposed. It enables signal detection specifically at microneedle tips and therefore may aid the construction of depth profiles of skin biomarkers. The detection method can be coupled with computerised densitometry for signal quantitation. The antigen specificity, sensitivity and functional stability of the device were assessed against a number of model biomarkers. Detection and analysis of endogenous antigens (interleukins 1α and 6) from the skin using the device was demonstrated. The results were verified using conventional enzyme-linked immunosorbent assays. The detection limit of the microneedle device, at ≤10 pg/mL, was at least comparable to conventional plate-based solid-phase enzyme immunoassays.

Relevance:

100.00%

Publisher:

Abstract:

Parkinson's disease is a neurodegenerative disease in which tremor is the main symptom. This paper investigates the use of different classification methods to identify tremors experienced by Parkinsonian patients. Some previous research has focused tremor analysis on external body signals (e.g., electromyography, accelerometer signals, etc.). Our advantage is that we have access to sub-cortical data, which facilitates transferring the obtained results to real medical devices, since we are dealing directly with brain signals. Local field potentials (LFP) were recorded in the subthalamic nucleus of 7 Parkinsonian patients through the implanted electrodes of a deep brain stimulation (DBS) device prior to its internalization. The measured LFP signals were preprocessed by means of splitting, down-sampling, filtering, normalization and rectification. Then, feature extraction was conducted through a multi-level decomposition via a wavelet transform. Finally, artificial intelligence techniques were applied to feature selection, clustering of tremor types, and tremor detection. The key contribution of this paper is to present initial results which indicate, with a high degree of certainty, that there appear to be two distinct subgroups of patients within group 1 of patients according to the Consensus Statement of the Movement Disorder Society on Tremor. Such results may well lead to different treatments for the patients involved, depending on how their tremor has been classified. Moreover, we propose a new approach for demand-driven stimulation, in which tremor detection is also based on the subtype of tremor the patient has. Applying this knowledge to the tremor detection problem, it can be concluded that the results improve when patient clustering is applied prior to detection.
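A minimal sketch of multi-level wavelet feature extraction followed by patient clustering; the wavelet family, number of levels, per-band features and cluster count are assumptions for illustration, not the paper's settings:

```python
# Hedged sketch: wavelet decomposition of preprocessed LFP signals and clustering.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_band_features(signal, wavelet="db4", level=5):
    """Energy and standard deviation of each decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([v for c in coeffs for v in (np.sum(c**2), np.std(c))])

def cluster_patients(lfp_signals, n_clusters=2):
    X = np.vstack([wavelet_band_features(s) for s in lfp_signals])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
```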

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an image labelling framework, Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used to evaluate the proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
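One way such a high-level descriptor could be formed is by histogramming region keywords over a fixed vocabulary; this is only an illustration of the idea, with an assumed vocabulary, and is not the paper's CCL implementation:

```python
# Hedged sketch: a keyword-histogram feature vector built from per-region labels.
import numpy as np

VOCABULARY = ["sky", "water", "grass", "building", "person", "animal"]  # assumed

def semantic_feature_vector(region_keywords):
    """region_keywords: list of keyword strings, one or more per image region."""
    index = {w: i for i, w in enumerate(VOCABULARY)}
    v = np.zeros(len(VOCABULARY))
    for word in region_keywords:
        if word in index:
            v[index[word]] += 1.0
    total = v.sum()
    return v / total if total > 0 else v      # normalised keyword histogram

# Example: an image whose regions were labelled "sky", "water", "water", "person"
print(semantic_feature_vector(["sky", "water", "water", "person"]))
```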

Relevance:

100.00%

Publisher:

Abstract:

In this study, the extraction properties of a synergistic system consisting of 2,6-bis-(benzoxazolyl)-4-dodecyloxylpyridine (BODO) and 2-bromodecanoic acid (HA) in tert-butyl benzene (TBB) have been investigated as a function of ionic strength by varying the nitrate ion and perchlorate ion concentrations. The influence of the hydrogen ion concentration has also been investigated. Distribution ratios of 0.03-12 and 0.003-0.8 have been found for Am(III) and Eu(III), respectively, although no attempt was made to maximize these values. It has been shown that the distribution ratios decrease with increasing amounts of ClO4-, NO3-, and H+. The mechanisms by which the decrease occurs are, however, different. In the case of increasing perchlorate ion concentration, the decrease in extraction is linear in a log-log plot of the distribution ratio vs. the ionic strength, while in the nitrate case the complexation between nitrate and Am or Eu increases at high nitrate ion concentrations and thereby decreases the distribution ratio in a non-linear way. The decrease in extraction could be caused by changes in activity coefficients that can be explained with specific ion interaction theory (SIT), by shielding of the metal ions, and by nitrate complexation with Am and Eu as a competing mechanism at high ionic strengths. The separation factor between Am and Eu reaches a maximum at approximately 1 M nitrate ion concentration; thereafter the values decrease with increasing nitrate ion concentration.
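A short sketch of checking the reported log-log linearity of the distribution ratio versus ionic strength with a simple least-squares fit; the data points are placeholders, not measurements from the paper:

```python
# Hedged sketch: slope of log10(D) vs log10(ionic strength) for the perchlorate series.
import numpy as np

ionic_strength = np.array([0.1, 0.3, 1.0, 3.0])     # mol/L (placeholder)
D_Am = np.array([8.0, 4.5, 1.8, 0.6])               # distribution ratios (placeholder)

slope, intercept = np.polyfit(np.log10(ionic_strength), np.log10(D_Am), 1)
print(f"log10(D) = {slope:.2f} * log10(I) + {intercept:.2f}")
```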

Relevance:

100.00%

Publisher:

Abstract:

The tetradentate ligand (C-5-BTBP) was able to extract americium(III) selectively from nitric acid. In octanol/kerosene the distribution ratios suggest that stripping will be possible. C-5-BTBP has unusual properties and potentially offers a means of separating metals that are otherwise difficult to separate. For example, C-5-BTBP has the potential to separate palladium(II) from a mixture containing rhodium(III) and ruthenium nitrosyl. In addition, C-5-BTBP has the potential to remove traces of cadmium from effluent or from solutions of other metals contaminated with cadmium. C-5-BTBP also has potential as a reagent for the separation of americium(III) from solutions contaminated with iron(III) and nickel(II), hence offering a means of concentrating americium(III) for analytical purposes from nitric acid solutions containing high concentrations of iron(III) or nickel(II).