837 results for: semi binary based feature detector descriptor


Relevance:

50.00%

Publisher:

Abstract:

A more natural, intuitive, user-friendly, and less intrusive human–computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine (SVM) classifier with Local Binary Patterns (LBP) as feature vectors. These detections are used as input to a tracker that generates a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor and one of the main contributions of the paper: it provides much richer spatio-temporal information than existing state-of-the-art approaches at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
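
As a rough illustration of the detection stage (not the authors' implementation), the sketch below extracts a Local Binary Pattern histogram from an image patch and feeds it to a binary SVM; the patch size, LBP parameters, and training data are assumptions made for the example.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # assumed LBP neighbourhood: 8 samples on a radius-1 circle

def lbp_histogram(patch):
    """Uniform-LBP histogram of a grayscale patch, used as the feature vector."""
    patch_u8 = (patch * 255).astype(np.uint8)
    codes = local_binary_pattern(patch_u8, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical training data: 64x64 grayscale patches with hand / no-hand labels.
rng = np.random.default_rng(0)
patches = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

X = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)  # binary hand / non-hand classifier

# Scanning a frame would then score lbp_histogram(window) for each sliding window.
print(clf.predict(X[:5]))
```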

Relevance:

50.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

50.00%

Publisher:

Abstract:

A new semi-implicit stress integration algorithm for finite strain plasticity (compatible with hyperelasticity) is introduced. Its most distinctive feature is the use of different parameterizations of the equilibrium and reference configurations. Rotation terms (nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the reference configuration. In contrast, relative Green–Lagrange strains (which are quadratic in terms of displacements) represent the equilibrium configuration implicitly. In addition, the adequacy of several objective stress rates in the semi-implicit context is studied. We parametrize both reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms, which use coinciding configurations. A single constitutive framework provides the quantities needed by common discretization schemes. This is computationally convenient and robust, as all elements only need to provide pre-established quantities, irrespective of the constitutive model. In this work, mixed strain/stress control is used, as well as our smoothing algorithm for the complementarity condition. Exceptional time-step robustness is achieved in elasto-plastic problems: often fewer than one-tenth of the typical number of time increments can be used, with a quantifiable effect on accuracy. The proposed algorithm is general: all hyperelastic models and all classical elasto-plastic models can be employed. Plane-stress, shell, and 3D examples are used to illustrate the new algorithm. Both isotropic and anisotropic behavior are presented in elasto-plastic and hyperelastic examples.
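
The abstract describes the explicit/implicit split only at a high level; the toy sketch below is a schematic analogue of a semi-implicit split — a rotation-like term advanced explicitly while a stiff relaxation term is treated implicitly — and is not the paper's stress-integration algorithm. All quantities are invented for illustration.

```python
import numpy as np

# Toy ODE: dy/dt = W @ y - k*y, with W a skew-symmetric (rotation-like) operator.
# Semi-implicit Euler: the rotation term uses y_n (explicit), while the stiff
# relaxation term uses y_{n+1} (implicit), which here has a closed-form solution.
W = np.array([[0.0, -1.0], [1.0, 0.0]])  # assumed spin/rotation operator
k, h = 50.0, 0.05                         # assumed stiffness and time step
y = np.array([1.0, 0.0])

for step in range(100):
    y = (y + h * W @ y) / (1.0 + h * k)   # explicit rotation, implicit relaxation

print(y)  # decays smoothly, without the step-size restriction of a fully explicit scheme
```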

Relevance:

40.00%

Publisher:

Abstract:

Semi-interpenetrating networks (semi-IPNs) with different compositions were prepared from poly(dimethylsiloxane) (PDMS), tetraethylorthosilicate (TEOS), and poly(vinyl alcohol) (PVA) by the sol-gel process in this study. The PDMS/PVA semi-IPNs were characterized using Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), and swelling measurements. The presence of PVA domains dispersed in the PDMS network disrupted the network and allowed PDMS to crystallize, as observed from the crystallization and melting peaks in the DSC analyses. Because of the presence of hydrophilic (-OH) and hydrophobic (Si(CH3)2) domains, there was an appropriate hydrophilic/hydrophobic balance in the semi-IPNs prepared, which led to a maximum equilibrium water content of ~14 wt % without a loss in the ability to swell in less polar solvents. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 115: 158-166, 2010
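
For reference, an equilibrium water content such as the ~14 wt % quoted above is commonly computed from the swollen and dry sample masses; the definition and the masses below are a standard convention and invented values, not figures taken from the paper.

```python
def equilibrium_water_content(mass_swollen_g, mass_dry_g):
    """EWC (wt %) = 100 * (m_swollen - m_dry) / m_swollen (common hydrogel definition)."""
    return 100.0 * (mass_swollen_g - mass_dry_g) / mass_swollen_g

# Hypothetical masses chosen so the result lands near the ~14 wt % reported above.
print(equilibrium_water_content(1.16, 1.00))  # ~13.8 wt %
```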

Relevance:

40.00%

Publisher:

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are left completely unspecified. Estimation is based on maximum likelihood, using the full likelihood, implemented via an expectation–conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than the fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
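
As a hedged, fully parametric stand-in for the model structure described above (logistic mixing probabilities plus a proportional-hazards model within each failure type), the sketch below writes down the mixture likelihood with exponential component baselines and fits it by direct optimization. The paper's method leaves the baselines unspecified and uses an ECM algorithm, which is not reproduced here; the data and parameter names are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Synthetic competing-risks data: covariate x, time t, cause in {1, 2}, 0 = censored.
rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
cause = rng.choice([0, 1, 2], size=n, p=[0.2, 0.4, 0.4])
t = rng.exponential(1.0, size=n)

def neg_log_lik(theta):
    g0, g1, la1, b1, la2, b2 = theta          # la* are log baseline rates
    p1 = expit(g0 + g1 * x)                   # logistic model for P(type 1 | x)
    r1 = np.exp(la1 + b1 * x)                 # type-1 hazard (exponential baseline)
    r2 = np.exp(la2 + b2 * x)                 # type-2 hazard
    s1, s2 = np.exp(-r1 * t), np.exp(-r2 * t) # component survival functions
    lik = np.where(cause == 1, p1 * r1 * s1,
          np.where(cause == 2, (1 - p1) * r2 * s2,
                   p1 * s1 + (1 - p1) * s2))  # censored: mixture of survivals
    return -np.sum(np.log(lik + 1e-300))

fit = minimize(neg_log_lik, x0=np.zeros(6), method="Nelder-Mead")
print(fit.x)  # [g0, g1, log lambda1, beta1, log lambda2, beta2]
```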

Relevance:

40.00%

Publisher:

Abstract:

One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the tubular structure of the costal cartilage and detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure, which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. Good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75 ± 0.04 and an average mean surface distance of 1.69 ± 0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
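
A minimal sketch of two ingredients mentioned above: multi-scale vesselness filtering and the Dice coefficient used for evaluation. It applies scikit-image's Frangi filter to a placeholder 2D slice rather than the full 3D pipeline, and the arrays are stand-ins.

```python
import numpy as np
from skimage.filters import frangi

# Placeholder 2D CT slice; the paper works on 3D volumes.
slice_2d = np.random.rand(128, 128)

# Multi-scale tubularity ("vesselness") response over a few assumed scales.
vesselness = frangi(slice_2d, sigmas=range(1, 5))

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seg = vesselness > vesselness.mean()                 # stand-in segmentation
ref = np.zeros_like(seg); ref[32:96, 32:96] = True   # stand-in reference mask
print(dice(seg, ref))
```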

Relevance:

40.00%

Publisher:

Abstract:

Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and can act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, which attain lower generalization error than state-of-the-art techniques while being dramatically simpler and faster.
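
A minimal sketch of a low-complexity relevance/redundancy filter in the spirit described above (not the authors' exact criteria): relevance is taken as absolute correlation with the label, redundancy as the maximum absolute correlation with already-selected features. Names, weights, and data are assumptions.

```python
import numpy as np

def rank_features(X, y, n_select, redundancy_weight=1.0):
    """Greedy filter: prefer features highly correlated with the label while
    penalizing correlation with features already selected (mRMR-style)."""
    n_features = X.shape[1]
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = max(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected)
            score = relevance[j] - redundancy_weight * redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Hypothetical data: 500 samples, 50 features, binary labels.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 50))
y = (X[:, 3] + X[:, 7] + rng.normal(size=500) > 0).astype(int)
print(rank_features(X, y, n_select=5))
```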

Relevance:

40.00%

Publisher:

Abstract:

In this brief, a ROM-less structure (using no read-only memory) for binary-to-residue number system (RNS) conversion modulo {2^n ± k} is proposed. This structure is based only on adders and constant multipliers. This brief is motivated by the existing {2^n ± k} binary-to-RNS converters, which are particularly inefficient for larger values of n. The experimental results obtained for 4n and 8n bits of dynamic range suggest that the proposed conversion structures are able to significantly improve the forward conversion efficiency, with an AT-metric improvement above 100% with respect to the related state of the art. Delay improvements of 2.17 times, with only a 5% area increase, can be achieved if a proper selection of the {2^n ± k} moduli is performed.
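
As a behavioural (software) illustration of the arithmetic identity behind such converters — 2^n ≡ ∓k (mod 2^n ± k), so an input can be reduced chunk by chunk using only additions and constant multiplications — the sketch below models the reduction in Python; it is not the hardware structure proposed in the brief.

```python
def binary_to_rns(x, n, k, sign=+1):
    """Reduce x modulo m = 2**n + sign*k by splitting x into n-bit chunks and
    using (2**n)**i ≡ (-sign*k)**i (mod m); only adds and constant multiplies."""
    m = 2**n + sign * k
    mask = (1 << n) - 1
    acc, weight = 0, 1                       # weight tracks (2**n)**i mod m
    while x:
        acc = (acc + (x & mask) * weight) % m
        weight = (weight * ((-sign * k) % m)) % m
        x >>= n
    return acc

# Quick check against direct reduction for a 32-bit input and n = 8, k = 3.
x = 0xDEADBEEF
assert binary_to_rns(x, 8, 3, +1) == x % (2**8 + 3)
assert binary_to_rns(x, 8, 3, -1) == x % (2**8 - 3)
print(binary_to_rns(x, 8, 3, +1))
```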

Relevance:

40.00%

Publisher:

Abstract:

Feature discretization (FD) techniques often yield adequate and compact representations of the data, suitable for machine learning and pattern recognition problems. These representations usually decrease the training time and yield higher classification accuracy, while allowing humans to better understand and visualize the data, as compared to the use of the original features. This paper proposes two new FD techniques. The first is based on the well-known Linde-Buzo-Gray quantization algorithm, coupled with a relevance criterion, and is able to perform unsupervised, supervised, or semi-supervised discretization. The second technique works in supervised mode and is based on the maximization of the mutual information between each discrete feature and the class label. Our experimental results on standard benchmark datasets show that these techniques scale up to high-dimensional data, attaining in many cases better accuracy than existing unsupervised and supervised FD approaches, while using fewer discretization intervals.
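
A hedged sketch of the two ingredients named above: a 1-D Linde–Buzo–Gray quantizer (codebook splitting plus Lloyd refinement) to discretize a feature, and the mutual information between the resulting discrete feature and the class label. The coupling with a relevance criterion and the paper's stopping rules are not reproduced; data and parameters are invented.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def lbg_discretize(x, n_levels=4, eps=1e-2, n_lloyd=25):
    """Linde-Buzo-Gray: grow the codebook by splitting, refine with Lloyd steps."""
    codebook = np.array([x.mean()])
    while len(codebook) < n_levels:
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_lloyd):
            idx = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
            for j in range(len(codebook)):
                if np.any(idx == j):
                    codebook[j] = x[idx == j].mean()
    codebook = np.sort(codebook)
    return np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)

# Hypothetical feature and binary class labels.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=1000)
feature = y + rng.normal(scale=0.8, size=1000)

bins = lbg_discretize(feature, n_levels=4)
print(mutual_info_score(y, bins))  # supervised criterion: MI between bins and class
```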

Relevance:

40.00%

Publisher:

Abstract:

Eradication of code smells is often pointed out as a way to improve readability, extensibility, and design in existing software. However, code smell detection remains time-consuming and error-prone, partly due to the inherent subjectivity of the detection processes presently available. To mitigate this subjectivity problem, this dissertation presents a tool, developed as an Eclipse plugin, that automates a technique for the detection and assessment of code smells in Java source code. The technique is based upon a binary logistic regression model that uses complexity metrics as independent variables and is calibrated by expert knowledge. An overview of the technique is provided, and the tool is described and validated with an example case study.
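
A hedged sketch of the statistical core of such a technique (a binary logistic regression over complexity metrics). The metric names, training data, and decision threshold are invented, and the expert-knowledge calibration step is not modelled.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-method complexity metrics: [cyclomatic complexity, LOC, parameters]
X = np.array([[12, 180, 6], [3, 25, 1], [25, 400, 9], [4, 40, 2],
              [18, 220, 7], [2, 15, 0], [30, 520, 11], [5, 60, 3]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = expert flagged the method as a smell

model = LogisticRegression(max_iter=1000).fit(X, y)

candidate = np.array([[20, 300, 8]], dtype=float)
probability = model.predict_proba(candidate)[0, 1]
print(f"smell probability: {probability:.2f}")  # flag as a smell above a chosen threshold
```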

Relevance:

40.00%

Publisher:

Abstract:

The aim of this work is to present the results of an analysis of the conceptions of two protagonists of a curricular reform being implemented at an engineering school. The main feature of the new curriculum is the use of projects and workshops as complementary activities to be carried out by the students. These complementary activities run in parallel with the work done in the regular courses, without any interdisciplinary relationship between them. The new curriculum has been in place since February 2015. According to Pacheco (2005), there are two moments, among others, in the process of curricular change: the "ideal" curriculum, determined by epistemological, political, economic, ideological, technical, aesthetic, and historical dimensions and directly influenced by whoever conceives and creates the new curriculum; and the "formal" curriculum, which translates into the practice implemented at the school. These are the two stages studied in this research. To this end, two protagonists are considered as data sources, one more closely tied to the conception of the curriculum and the other to its implementation, through whom we seek to understand the motivations, beliefs, and perceptions that, in turn, shape the curricular reform. Semi-structured interviews were used as the research technique, with the purpose of understanding the genesis of the proposal and the changes between these two stages. The data reveal that changes occurred from the idealization to the formalization of the curriculum, driven by demands of the implementation process; they also reveal differences in the view of curriculum and the motivation to break with established patterns in engineering education in Brazil.

Relevance:

40.00%

Publisher:

Abstract:

...In this work I investigate the "curse of dimensionality" through the notion of distance concentration. I show that, in the data model considered, this effect can be described by means of the pairwise covariance coefficients of the marginal distributions. In addition, I compare 10 prototype-based clustering algorithms using 800,000 clustering results on artificially generated datasets. I investigate how and why clustering algorithms are influenced by the number of features. Using the clustering results, I also examine how well 5 of the most popular cluster quality measures estimate the actual clustering quality.
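
A small sketch illustrating the distance-concentration effect discussed above: as the number of features grows, the relative contrast between the smallest and largest pairwise distances shrinks. The data model here (independent uniform features) is an assumption for illustration, not the covariance-based model of the thesis.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
for d in (2, 10, 100, 1000):
    points = rng.random((200, d))              # 200 points, d independent uniform features
    dist = pdist(points)                       # all pairwise Euclidean distances
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:5d}  relative contrast={contrast:.3f}")  # shrinks as d grows
```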

Relevance:

40.00%

Publisher:

Abstract:

Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum-cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm performs favorably against the literature.
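
The continuous min-cut solver itself is not shown here; as a rough stand-in for the same idea — user-provided seed labels acting as soft priors for a graph-based segmentation — the sketch below uses scikit-image's seeded random-walker segmentation on a toy image. The seeds, image, and parameters are invented, and the random walker is a substitute technique, not the authors' algorithm.

```python
import numpy as np
from skimage.segmentation import random_walker

# Toy "ultrasound-like" image: a bright disc on a noisy background.
rng = np.random.default_rng(5)
img = rng.normal(0.2, 0.1, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] += 0.6

# User-assisted initialization: a foreground seed inside the target, background seeds outside.
seeds = np.zeros(img.shape, dtype=np.uint8)   # 0 = unlabeled
seeds[64, 64] = 1                             # label 1: object
seeds[5, 5] = seeds[120, 120] = 2             # label 2: background

labels = random_walker(img, seeds, beta=50)   # graph-based, seed-driven segmentation
print((labels == 1).sum(), "pixels assigned to the object")
```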

Relevance:

40.00%

Publisher:

Abstract:

This work is a review of several Machine Translation systems that follow the transfer strategy and use feature structures as their representation tool. The work is part of project MLAP-9315, which investigates the reuse of the linguistic specifications of the EUROTRA project for industrial standards.

Relevance:

40.00%

Publisher:

Abstract:

Several features that can be extracted from digital images of the sky, and that can be useful for cloud-type classification of such images, are presented. Some features are statistical measurements of image texture, some are based on the Fourier transform of the image, and others are computed from the image after cloudy pixels have been distinguished from clear-sky pixels. The use of the most suitable features in an automatic classification algorithm is also shown and discussed. Both the features and the classifier are developed over images taken by two different camera devices, namely a total sky imager (TSI) and a whole sky camera (WSC), which are placed in two different areas of the world (Toowoomba, Australia, and Girona, Spain, respectively). The performance of the classifier is assessed by comparing its image classification with an a priori classification carried out by visual inspection of more than 200 images from each camera. The index of agreement is 76% when five different sky conditions are considered: clear, low cumuliform clouds, stratiform clouds (overcast), cirriform clouds, and mottled clouds (altocumulus, cirrocumulus). A discussion of the future directions of this research is also presented, regarding both the use of other features and the use of other classification techniques.
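
A minimal sketch of one family of features mentioned above — statistical texture measurements, here taken from a gray-level co-occurrence matrix — feeding a simple classifier. The feature set, classifier, and image patches are stand-ins, not those of the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def texture_features(img_uint8):
    """Contrast, homogeneity and energy from a 1-pixel-offset co-occurrence matrix."""
    glcm = graycomatrix(img_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]

# Hypothetical sky-image patches with two classes (e.g. clear vs. cumuliform).
rng = np.random.default_rng(6)
smooth = [rng.integers(100, 120, (64, 64), dtype=np.uint8) for _ in range(10)]
rough = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(10)]
X = np.array([texture_features(p) for p in smooth + rough])
y = np.array([0] * 10 + [1] * 10)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:3]), clf.predict(X[-3:]))
```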