56 results for Texture Feature
Abstract:
Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. The core features identified from the literature are then refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would serve as indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development prior to statistical confirmation with a larger sample.
Abstract:
Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses with a goal of spatial grid resolutions as low as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent to both analysis procedures. Comparisons of the two analyses (both having comparable grid sizes) show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
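As a minimal sketch of the wavenumber spectral comparison described above (assuming 1-D SST transects on a regular grid; the synthetic fields, grid spacing and smoothing are illustrative assumptions, not data from the study), one could estimate auto- and cross-spectra with an FFT and check how the analysis attenuates small scales:

```python
import numpy as np

def wavenumber_spectra(true_sst, analysis_sst, dx_km=1.0):
    """Auto- and cross-spectra of two 1-D SST transects sampled every dx_km."""
    n = len(true_sst)
    # Remove the mean so the zero-wavenumber bin does not dominate.
    t = true_sst - true_sst.mean()
    a = analysis_sst - analysis_sst.mean()
    ft = np.fft.rfft(t)
    fa = np.fft.rfft(a)
    k = np.fft.rfftfreq(n, d=dx_km)            # wavenumber in cycles per km
    auto_true = (np.abs(ft) ** 2) / n
    auto_analysis = (np.abs(fa) ** 2) / n
    cross = (ft * np.conj(fa)) / n
    # Squared coherence: how much of the true variance the analysis reproduces at each scale.
    coherence = np.abs(cross) ** 2 / (auto_true * auto_analysis + 1e-12)
    return k, auto_true, auto_analysis, coherence

# Illustrative use with synthetic data (not the model fields from the study).
x = np.arange(0, 512) * 1.0
true_sst = np.sin(2 * np.pi * x / 100.0) + 0.3 * np.sin(2 * np.pi * x / 10.0)
analysis_sst = np.convolve(true_sst, np.ones(5) / 5.0, mode="same")  # smoothing attenuates small scales
k, s_true, s_analysis, coh = wavenumber_spectra(true_sst, analysis_sst)
```

Comparing s_analysis against s_true as a function of k makes the high-wavenumber attenuation visible, which is the kind of diagnostic the abstract describes.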
Abstract:
Recent studies have shown that features extracted from brain MRIs can discriminate Alzheimer’s disease from Mild Cognitive Impairment well. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used to solve a multi-class problem with the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machine-based wrapper improves the accuracy of binary classification.
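A minimal sketch of the filter-plus-wrapper idea described above, assuming scikit-learn and a synthetic stand-in for the ROI feature matrix (the estimator settings and feature counts are placeholders, not the values used in the study):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for ROI-based MRI features with binary labels.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8, random_state=0)

pipeline = Pipeline([
    # Filter step: keep features ranked highly by Random Forest importance.
    ("rf_filter", SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0),
                                  threshold="median")),
    # Wrapper step: greedy forward selection driven by SVM cross-validated accuracy.
    ("svm_wrapper", SequentialFeatureSelector(SVC(kernel="linear"),
                                              n_features_to_select=10, cv=5)),
    ("svm", SVC(kernel="linear")),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print("binary accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

For the multi-class step, SVC already applies a one-versus-one strategy internally when given more than two classes, so the same pipeline could be refit on multi-class labels.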
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples which, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for establishing complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large data sets.
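A minimal sketch of a complex-valued extreme learning machine of the kind described, assuming complex feature vectors built from amplitude and phase (the hidden-layer size, activation and random data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_celm(X, y, n_hidden=100):
    """Complex-valued ELM: random complex hidden layer, least-squares output weights.
    X: (n_samples, n_features) complex array, e.g. amplitude * exp(1j * phase).
    y: integer class labels."""
    n_classes = int(y.max()) + 1
    W = (rng.standard_normal((X.shape[1], n_hidden)) +
         1j * rng.standard_normal((X.shape[1], n_hidden)))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # complex hidden activations (random, never trained)
    T = np.eye(n_classes)[y]                    # one-hot targets
    beta = np.linalg.pinv(H) @ T                # Moore-Penrose least-squares output weights
    return W, b, beta

def predict_celm(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax((H @ beta).real, axis=1)   # decide on the real part of the outputs

# Illustrative use with random complex "spectra" (not terahertz data).
X = rng.standard_normal((60, 32)) + 1j * rng.standard_normal((60, 32))
y = rng.integers(0, 2, size=60)
W, b, beta = train_celm(X, y)
pred = predict_celm(X, W, b, beta)
```

Because only the output weights are solved for, training reduces to a single pseudo-inverse, which is what makes the approach attractive for very large data sets.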
Abstract:
A fractal with microscopic anisotropy shows a unique type of macroscopic isotropy restoration phenomenon that is absent in Euclidean space [M. T. Barlow et al., Phys. Rev. Lett. 75, 3042]. In this paper the isotropy restoration feature is considered for a family of two-dimensional Sierpinski gasket type fractal resistor networks. A parameter ξ is introduced to describe this phenomenon. Our numerical results show that ξ satisfies the scaling law ξ ~ l^(-α), where l is the system size and α is an exponent independent of the degree of microscopic anisotropy, characterizing the isotropy restoration feature of the fractal systems. By changing the underlying fractal structure towards the Euclidean triangular lattice through increasing the side length b of the gasket generators, the fractal-to-Euclidean crossover behavior of the isotropy restoration feature is discussed.
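As a small sketch of how the exponent α in ξ ~ l^(-α) could be extracted from such measurements (the (l, ξ) values below are purely hypothetical, not results from the paper), a log-log least-squares fit suffices:

```python
import numpy as np

# Hypothetical (l, xi) measurements for successive gasket sizes; illustrative only.
l = np.array([3, 9, 27, 81, 243], dtype=float)    # system sizes
xi = np.array([0.50, 0.21, 0.09, 0.038, 0.016])   # anisotropy-restoration parameter

# xi ~ l**(-alpha) is a straight line in log-log coordinates,
# so alpha is minus the slope of a linear fit.
slope, intercept = np.polyfit(np.log(l), np.log(xi), 1)
alpha = -slope
print("estimated alpha = %.3f" % alpha)
```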
Abstract:
This paper describes a new approach to detect and track maritime objects in real time. The approach particularly addresses the highly dynamic maritime environment, panning cameras, and target scale changes, and it operates on both visible and thermal imagery. Object detection is based on agglomerative clustering of temporally stable features. Object extents are first determined based on the persistence of detected features and their relative separation and motion attributes. An explicit cluster merging and splitting process handles object creation and separation. Stable object clusters are tracked frame-to-frame. The effectiveness of the approach is demonstrated on four challenging real-world public datasets.
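A minimal sketch of agglomerative clustering of stable feature points into object candidates, assuming SciPy and a hypothetical per-feature vector of position and velocity (the feature values and distance threshold are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical temporally stable feature points: position (x, y) and velocity (vx, vy).
features = np.array([
    [100.0, 50.0,  1.0, 0.1],
    [104.0, 52.0,  1.1, 0.0],
    [ 98.0, 55.0,  0.9, 0.2],   # three features moving together -> one object
    [300.0, 80.0, -0.5, 0.0],
    [305.0, 84.0, -0.4, 0.1],   # two features moving together -> a second object
])

# Single-link agglomerative clustering: features that stay close and move
# similarly are merged into the same object cluster.
Z = linkage(features, method="single", metric="euclidean")
labels = fcluster(Z, t=15.0, criterion="distance")   # distance threshold is a tunable assumption
print(labels)   # e.g. [1 1 1 2 2] -> two object clusters
```

Tracking then amounts to re-associating cluster centroids frame-to-frame, with explicit merge and split handling as described in the abstract.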
Abstract:
Multispectral iris recognition uses information from multiple bands of the electromagnetic spectrum to better represent certain physiological characteristics of the iris texture and enhance recognition accuracy. This paper addresses the question of single versus cross-spectral performance and compares score-level fusion accuracy for different feature types, combining different wavelengths to overcome limitations of less constrained recording environments. It further investigates whether Doddington's “goats” (users who are particularly difficult to recognize) in one spectrum also extend to other spectra. Focusing on the question of feature stability at different wavelengths, this work uses manual ground truth segmentation, avoiding bias introduced by the segmentation step. Experiments on the public UTIRIS multispectral iris dataset using four feature extraction techniques reveal a significant enhancement when combining NIR + Red for 2-channel and NIR + Red + Blue for 3-channel fusion, across different feature types. Selective feature-level fusion is investigated and shown to improve overall and especially cross-spectral performance without increasing the overall length of the iris code.
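A minimal sketch of score-level fusion across spectral channels, assuming one comparison score per channel per trial and a simple sum rule after min-max normalization (the scores, weights and normalization choice are illustrative assumptions, not the paper's exact fusion scheme):

```python
import numpy as np

def minmax_normalize(scores):
    """Map raw comparison scores to [0, 1] so channels are commensurable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(channel_scores, weights=None):
    """Weighted sum-rule score-level fusion across spectral channels."""
    normalized = np.vstack([minmax_normalize(s) for s in channel_scores])
    if weights is None:
        weights = np.ones(len(channel_scores)) / len(channel_scores)
    return weights @ normalized

# Illustrative dissimilarity scores for the same comparison trials in three bands.
nir  = [0.21, 0.45, 0.38, 0.12]
red  = [0.25, 0.50, 0.33, 0.10]
blue = [0.30, 0.47, 0.40, 0.18]

fused_2ch = fuse_scores([nir, red])          # NIR + Red, 2-channel fusion
fused_3ch = fuse_scores([nir, red, blue])    # NIR + Red + Blue, 3-channel fusion
```

Feature-level fusion, by contrast, would combine the per-channel iris codes before matching rather than combining the resulting scores.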
Abstract:
Contemporary US sitcom is at an interesting crossroads: it has received an increasing amount of scholarly attention (e.g. Mills 2009; Butler 2010; Newman and Levine 2012; Vermeulen and Whitfield 2013), which largely understands it as shifting towards the aesthetically and narratively complex. At the same time, in the post-broadcasting era, US networks are particularly struggling for their audience share. With the days of blockbuster successes like Must See TV’s Friends (NBC 1994-2004) a distant dream, recent US sitcoms are instead turning towards smaller, engaged audiences. Here, a cult sensibility of intertextual in-jokes, temporal and narrational experimentation (e.g. flashbacks and alternate realities) and self-reflexive performance styles has marked shows including Community (NBC 2009-2015), How I Met Your Mother (CBS 2005-2014), New Girl (Fox 2011-present) and 30 Rock (NBC 2006-2013). However, not much critical attention has so far been paid to how these developments in textual sensibility in contemporary US sitcom may be influenced by, and may in turn influence, the use of transmedia storytelling practices, an increasingly significant industrial concern and rising scholarly field of enquiry (e.g. Jenkins 2006; Mittell 2015; Richards 2010; Scott 2010; Jenkins, Ford and Green 2013). This chapter investigates this mutual influence between sitcom and transmedia by taking as its case studies two network shows that encourage invested viewership through their use of transtexts, namely How I Met Your Mother (hereafter HIMYM) and New Girl (hereafter NG). As such, it pays particular attention to the most transtextually visible character/actor from each show: HIMYM’s Barney Stinson, played by Neil Patrick Harris, and NG’s Schmidt, played by Max Greenfield. This chapter argues that these sitcoms do not simply have their particular textual sensibility and also (happen to) engage with transmedia practices, but that the two are mutually informing and defining. It explores the relationships and interplay between sitcom aesthetics, narratives and transmedia storytelling (or industrial transtexts), focusing on the use of multiple delivery channels in order to disperse “integral elements of a fiction” (Jenkins 2006: 95-6) by official entities such as the broadcasting channels. The chapter pays due attention to the specific production contexts of both shows and how these inform their approaches to transtexts. Its conceptual framework is particularly concerned with how issues of texture, the reality envelope and accepted imaginative realism, as well as performance and the actor’s input, inform and illuminate contemporary sitcoms and transtexts, and it is the first scholarly research to do so. It seeks out points of connection between two (thus far) separate strands of scholarship, moves discussions on transtexts beyond the usual genres studied (i.e. science fiction and fantasy), and makes a contribution to the growing scholarship on contemporary sitcom by approaching it from a new critical angle. On the basis that transmedia scholarship stands to benefit from widening its customary genre choice (i.e. telefantasy) for its case studies and from making more use of in-depth close analysis in its engagement with transtexts, the chapter argues that notions of texture, accepted imaginative realism and the reality envelope, as well as performance and the actor’s input, deserve more attention within transtext-related scholarship.