998 results for Feature detector


Relevance: 20.00%

Abstract:

The concept of feature selection in a nonparametric unsupervised learning environment is practically undeveloped because no true measure of the effectiveness of a feature exists in such an environment. The lack of a feature selection phase preceding the clustering process seriously affects the reliability of such learning. New concepts such as significant features, the level of significance of features, and the immediate neighborhood are introduced, which implicitly meet the need for feature selection in the context of clustering techniques.
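The abstract does not spell out how the level of significance of a feature is computed; as a loose, hypothetical illustration of unsupervised feature scoring, the sketch below ranks the features of unlabeled data by their dispersion and keeps only those above a threshold. All names and the variance criterion are assumptions, not the paper's method.

```python
# Hypothetical sketch: score each feature of an unlabeled dataset by its
# dispersion and keep the "significant" ones above a threshold.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def significant_features(data, threshold):
    """data: list of samples, each a list of feature values.
    Returns the indices of features whose variance >= threshold."""
    n_features = len(data[0])
    kept = []
    for j in range(n_features):
        column = [row[j] for row in data]
        if variance(column) >= threshold:
            kept.append(j)
    return kept

data = [
    [1.0, 5.0, 0.1],
    [2.0, 5.1, 9.8],
    [1.5, 4.9, 0.2],
    [9.0, 5.0, 9.9],
]
print(significant_features(data, threshold=1.0))  # → [0, 2]
```

Feature 1 is nearly constant across the samples, so it is dropped; features 0 and 2 vary enough to influence a clustering.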

Relevance: 20.00%

Abstract:

Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for the detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting the classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10^6, or by post-processing the images.
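The abstract frames band selection as feature selection but does not give the statistical criterion; the sketch below illustrates one simple possibility, ranking bands by a Fisher-like separation score between "healthy" and "(pre-)sign" spectra and keeping the top k. The scoring function and all data are illustrative assumptions.

```python
# Illustrative sketch: choose k spectral bands by a Fisher-like score
# (squared difference of class means over summed class variances).
# This is not the paper's actual statistical analysis.

def fisher_score(healthy_vals, sign_vals):
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / len(xs)
    m1, m2 = mean(healthy_vals), mean(sign_vals)
    v1 = var(healthy_vals, m1)
    v2 = var(sign_vals, m2)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)  # guard against zero variance

def select_bands(healthy, sign, k):
    """healthy/sign: lists of spectra (lists of per-band reflectances).
    Returns the indices of the k most discriminative bands, sorted."""
    n_bands = len(healthy[0])
    scored = []
    for b in range(n_bands):
        h = [spectrum[b] for spectrum in healthy]
        g = [spectrum[b] for spectrum in sign]
        scored.append((fisher_score(h, g), b))
    return sorted(b for _, b in sorted(scored, reverse=True)[:k])

healthy = [[0.9, 0.5, 0.4, 0.2], [0.8, 0.5, 0.5, 0.3]]
sign = [[0.3, 0.5, 0.9, 0.2], [0.2, 0.4, 0.8, 0.3]]
print(select_bands(healthy, sign, k=2))  # → [0, 2]
```

Bands where the two classes differ strongly relative to their spread are kept; bands 1 and 3 carry little class information in this toy data.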

Relevance: 20.00%

Abstract:

This paper presents two algorithms for smoothing and feature extraction for fingerprint classification. Deutsch's thinning algorithm [2] (rectangular array) is used to thin the digitized (binary) fingerprint. A simple algorithm is also suggested for classifying the fingerprints. Experimental results obtained using these algorithms are presented.

Relevance: 20.00%

Abstract:

Training for bodybuilding competition is clearly a serious business that makes heavy demands on the competitor. Not only did Francis commit time and money to compete, but he also arguably put winning before his physical well-being, enduring pain and suffering from his injury. Bodybuilding may seem like an extreme example, but it is not the only activity in which people suffer in pursuit of their goals. Boxers fight each other in the ring; soccer players risk knee and ankle injuries, sometimes playing despite being hurt; and mountaineers risk their lives on dangerous climbs. The arts offer many examples of people suffering to achieve their goals: Beethoven kept composing, conducting, and performing despite his hearing loss; van Gogh grappled with depression but kept painting, finding fame only posthumously; and Mozart lived the final years of his life impoverished but still composing. These examples show that many great achievements come at a price: severe suffering...

Relevance: 20.00%

Abstract:

This paper presents 'vSpeak', the first initiative taken in Pakistan for ICT-enabled conversion of dynamic Sign Urdu gestures into natural-language sentences. To realize this, vSpeak adopts a novel approach to feature extraction using edge detection and image compression, which feeds the artificial neural network that recognizes the gesture. The technique also caters for blurred images. Training and testing are currently being performed on a dataset of 200 patterns of 20 Sign Urdu words, with a target accuracy of 90% or above.
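The abstract names edge detection as the first feature-extraction step but gives no detail; as a generic stand-in, the sketch below computes a simple gradient-magnitude edge map on a tiny grayscale image. The operator and threshold are assumptions, not vSpeak's actual pipeline.

```python
# Generic edge-detection sketch (not vSpeak's actual method): mark
# pixels whose horizontal + vertical gradient magnitude exceeds a
# threshold. img is a list of rows of grayscale values.

def edge_map(img, threshold):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # central difference, x
            gy = img[y + 1][x] - img[y - 1][x]  # central difference, y
            if abs(gx) + abs(gy) > threshold:
                out[y][x] = 1
    return out

img = [
    [0, 0,  0,  0],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0,  0,  0],
]
print(edge_map(img, threshold=5))  # 1s appear along the bright patch
```

The resulting binary map could then be compressed and fed to a neural network, as the abstract describes.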

Relevance: 20.00%

Abstract:

Generating discriminative input features is a key requirement for achieving highly accurate classifiers. The process of generating features from raw data is known as feature engineering, and it can take significant manual effort. In this paper we propose automated feature engineering to derive a suite of additional features from a given set of basic features, with the aim of both improving classifier accuracy through discriminative features and assisting data scientists through automation. Our implementation is specific to HTTP computer network traffic. To measure the effectiveness of our proposal, we compare the performance of a supervised machine-learning classifier built with automated feature engineering against one using human-guided features. The classifier addresses a problem in computer network security, namely the detection of HTTP tunnels. We use Bro to process network traffic into base features and then apply automated feature engineering to calculate a larger set of derived features. The derived features are calculated without favour to any base feature and include entropy, length, and N-grams for all string features, and counts and averages over time for all numeric features. Feature selection is then used to find the most relevant subset of these features. Testing showed that both classifiers achieved a detection rate above 99.93% at a false-positive rate below 0.01%. For our datasets, we conclude that automated feature engineering can increase classifier development speed and reduce technical difficulty by removing manual feature engineering, while maintaining classification accuracy.
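The derived features named above (entropy, length, and N-grams for string fields) can be sketched as follows. The feature names, the bigram choice, and the entropy definition are assumptions; the paper's exact derivation rules and its Bro integration are not given in the abstract.

```python
# Sketch of deriving features from a string field (e.g. an HTTP URI):
# length, Shannon entropy, and distinct-bigram count. Illustrative only.
import math
from collections import Counter

def shannon_entropy(s):
    """Entropy in bits per character of a string."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def bigrams(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

def derive_string_features(value):
    return {
        "length": len(value),
        "entropy": shannon_entropy(value),
        "bigram_count": len(set(bigrams(value))),
    }

# A high-entropy URI is one signal a tunnel detector might pick up,
# since tunneled payloads often look random compared to normal paths.
print(derive_string_features("aaaa"))  # repeated chars → zero entropy
```

Applying such derivations uniformly to every base feature, then pruning with feature selection, is the automation the abstract describes.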

Relevance: 20.00%

Abstract:

In this paper, we present a new feature-based approach to mosaicing camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. The 2-D discrete cosine transform is computed for image blocks defined around each detected corner, and a small subset of the coefficients is used as a feature vector. A 2-pass feature matching establishes point correspondences from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually captured with a hand-held camera, yielding convincing results.
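The per-corner descriptor step can be sketched as follows: take the 2-D DCT of a small block around a detected corner and keep a few low-frequency coefficients as the feature vector. The block size, the number of kept coefficients, and the row-major scan (rather than a zig-zag scan) are illustrative assumptions, not the paper's values.

```python
# Sketch of a DCT-based corner descriptor. The naive O(n^4) DCT is fine
# for the small blocks used here; it is not the paper's implementation.
import math

def dct2(block):
    """Unnormalized 2-D DCT-II of a square block (list of lists)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = s
    return out

def corner_descriptor(block, k=3):
    """Keep the k x k low-frequency coefficients in row-major order."""
    c = dct2(block)
    return [c[u][v] for u in range(k) for v in range(k)]

flat = [[1.0] * 4 for _ in range(4)]  # a featureless block
desc = corner_descriptor(flat, k=2)
print(desc)  # only the DC term is non-zero for a constant block
```

Matching these compact vectors between two images yields the point correspondences from which the homography is estimated.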

Relevance: 20.00%

Abstract:

We describe a novel method for human activity segmentation and interpretation in surveillance applications based on Gabor filter-bank features. A complex human activity is modeled as a sequence of elementary human actions such as walking, running, jogging, boxing, and hand-waving. Since the human silhouette can be modeled by a set of rectangles, the elementary human actions can be modeled as a sequence of sets of rectangles with different orientations and scales. The activity segmentation is based on Gabor filter-bank features and normalized spectral clustering. The feature trajectories of an action category are learnt from training example videos using dynamic time warping. The combined segmentation and recognition processes are very efficient because both algorithms share the same framework, and the Gabor features computed for the former can be reused for the latter. We also propose a simple shadow-detection technique to extract clean silhouettes, which is necessary for accurate action recognition.
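A Gabor filter bank consists of oriented kernels at several orientations and scales; a minimal generator for the real part of one kernel is sketched below. All parameter values and names are illustrative, and the paper's actual bank configuration is not given in the abstract.

```python
# Sketch: real part of a Gabor kernel (a Gaussian envelope times an
# oriented cosine carrier). A bank is built by varying theta and scale.
import math

def gabor_kernel(size, theta, wavelength, sigma):
    """size: odd kernel width; theta: orientation in radians;
    wavelength: carrier period in pixels; sigma: Gaussian spread."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

# Four orientations, as a toy filter bank.
bank = [gabor_kernel(7, k * math.pi / 4, 4.0, 2.0) for k in range(4)]
print(len(bank), len(bank[0]))  # 4 kernels of size 7
```

Convolving the silhouette image with each kernel and pooling the responses yields the orientation/scale features the method clusters and matches.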

Relevance: 20.00%

Abstract:

Acute knee injury is a common event throughout life, and it is usually the result of a traffic accident, simple fall, or twisting injury. Over 90% of patients with acute knee injury undergo radiography. An overlooked fracture or delayed diagnosis can lead to poor patient outcome. The major aim of this thesis was to retrospectively study imaging of knee injury, with a special focus on tibial plateau fractures, in patients referred to a level-one trauma center. Multi-detector computed tomography (MDCT) findings of acute knee trauma were studied and compared to radiography, as well as whether non-contrast MDCT can assess the cruciate ligaments with reasonable accuracy. The prevalence, type, and location of meniscal injuries in magnetic resonance imaging (MRI) were evaluated, particularly to assess the prevalence of unstable meniscal tears in acute knee trauma with tibial plateau fractures. The possibility of analyzing, with conventional MRI, the signal appearance of menisci repaired with bioabsorbable arrows was also studied. The postoperative use of MDCT was studied in surgically treated tibial plateau fractures: to establish the frequency and indications of MDCT and to assess the common findings and their clinical impact in a level-one trauma hospital. This thesis focused on MDCT and MRI of knee injuries, and radiographs were analyzed when applicable. Radiography constitutes the basis for imaging acute knee injury, but MDCT can yield information beyond the capabilities of radiography. Especially in severely injured patients, sufficient radiographs are often difficult to obtain, and in those patients radiography is unreliable for ruling out fractures. MDCT detected intact cruciate ligaments with good specificity, accuracy, and negative predictive value, but the assessment of torn ligaments was unreliable. A total of 36% (14/39) of patients with tibial plateau fracture had an unstable meniscal tear in MRI.
When a meniscal tear is properly detected preoperatively, its treatment can be combined with primary fracture fixation, thus avoiding a second operation. The number of meniscal contusions was high. Awareness of the imaging features of this meniscal abnormality can help radiologists increase specificity by avoiding false-positive diagnoses of meniscal tears. Menisci repaired with bioabsorbable arrows showed no difference in MRI signal intensities between patients with an operated ACL and those with an intact ACL. The highest incidence of menisci with an increased signal intensity extending to the meniscal surface was in patients operated on within the previous 18 months. The results may indicate that a rather long time is necessary for menisci to heal completely after arrow repair. Whether menisci with an increased signal intensity extending to the meniscal surface represent improper healing or re-tear, or merely an earlier stage of the natural healing process, remains unclear, and further prospective studies are needed. Postoperative use of MDCT in tibial plateau fractures was rather infrequent even in this large trauma center, but when performed it revealed clinically significant information, benefiting patients' treatment.

Relevance: 20.00%

Abstract:

This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of the hadronic tau lepton decays, i.e. tau-jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, the tau-jet identification is the single most important criterion for separating the tiny Higgs boson signal from a large number of background events. The tau-jet identification is studied with methods based on a signature of a low charged track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons.
The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions. They are a manifestation of a mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe for finding out properties of the microcosm of particles and their interactions in the energy scales beyond the standard model of particle physics.

Relevance: 20.00%

Abstract:

The increased availability of image capturing devices has enabled collections of digital images to rapidly expand in both size and diversity. This has created a constantly growing need for efficient and effective image browsing, searching, and retrieval tools. Pseudo-relevance feedback (PRF) has proven to be an effective mechanism for improving retrieval accuracy. An original, simple, yet effective rank-based PRF mechanism (RB-PRF), which takes into account the initial rank order of each image, is proposed to improve retrieval accuracy. This RB-PRF mechanism innovates by making use of binary image signatures to improve retrieval precision, promoting images similar to highly ranked images and demoting images similar to lower-ranked images. Empirical evaluations based on standard benchmarks, namely the Wang, Oliva & Torralba, and Corel datasets, demonstrate the effectiveness of the proposed RB-PRF mechanism in image retrieval.
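The promote/demote idea can be sketched with binary signatures compared by Hamming distance: images whose signatures are close to the top-ranked results get a score boost, and those close to the bottom-ranked get a penalty. The scoring weights and all names below are illustrative assumptions, not the actual RB-PRF formulation.

```python
# Minimal sketch of a rank-based PRF step over binary image signatures
# (stored as int bitmasks). Illustrative weights, not the paper's.

def hamming(a, b):
    return bin(a ^ b).count("1")

def rerank(ranked, signatures, n_top=2, n_bottom=2, radius=2):
    """ranked: image ids, best first; signatures: id -> int bitmask.
    Promote images near the top results, demote those near the bottom."""
    top = ranked[:n_top]
    bottom = ranked[-n_bottom:]
    scores = {}
    for pos, img in enumerate(ranked):
        score = len(ranked) - pos  # score derived from the initial rank
        sig = signatures[img]
        score += sum(1 for t in top if hamming(sig, signatures[t]) <= radius)
        score -= sum(1 for b in bottom if hamming(sig, signatures[b]) <= radius)
        scores[img] = score
    return sorted(ranked, key=lambda i: -scores[i])

sigs = {"a": 0b0000, "b": 0b1111, "c": 0b0001, "d": 0b1110}
print(rerank(["a", "b", "c", "d"], sigs, n_top=1, n_bottom=1, radius=1))
# → ['a', 'c', 'b', 'd']
```

Image "c" rises above "b" because its signature nearly matches the top result "a", while "b" is pushed down for resembling the bottom result "d".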

Relevance: 20.00%

Abstract:

Guo and Nixon proposed a feature selection method based on maximizing I(x; Y), the multidimensional mutual information between the feature vector x and the class variable Y. Because computing I(x; Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x; Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x; Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives for computing I(x; Y).
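Estimating the full multidimensional I(x; Y) requires the joint distribution over all features at once, which is why approximations such as Guo and Nixon's are used. The one-feature base case, however, has a simple plug-in estimate from counts, sketched below; this is background illustration, not the paper's approximation.

```python
# Plug-in estimate of I(X; Y) in bits for paired discrete samples:
# I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) ).
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) written with counts to limit rounding.
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # → 1.0 (Y determined by X)
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0 (independent)
```

The difficulty the abstract refers to is that for a feature *vector* the Counter over tuples becomes exponentially sparse, so such direct counting breaks down.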

Relevance: 20.00%

Abstract:

A search for new physics using three-lepton (trilepton) data collected with the CDF II detector, corresponding to an integrated luminosity of 976 pb^-1, is presented. The standard model predicts a low rate of trilepton events, which makes some supersymmetric processes, such as chargino-neutralino production, measurable in this channel. The mu+mu+l signature is investigated, where l is an electron or a muon, with the additional requirement of large missing transverse energy. In this analysis, lepton transverse momenta with respect to the beam direction (pT) as low as 5 GeV/c are accepted, a selection that improves the sensitivity both to light particles and to leptonically decaying tau leptons. At the same time, this low-pT selection presents additional challenges due to the non-negligible heavy-quark background at low lepton momenta. This background is measured with an innovative technique using experimental data. Several dimuon and trilepton control regions are investigated, and good agreement between experimental results and standard-model predictions is observed. In the signal region, we observe one three-muon event and expect 0.4 +/- 0.1 mu+mu+l events.

Relevance: 20.00%

Abstract:

A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.