27 results for "Feature taxonomy"
Abstract:
Ontologies have become widely accepted as the main method for representing knowledge in Knowledge Management (KM) applications. Given the continuous, rapid change and dynamic nature of knowledge in all fields, automated methods for constructing ontologies are of great importance. All ontologies or taxonomies currently in use have been hand-built and require considerable manpower to keep up to date. Taxonomies are less logically rigorous than ontologies, and in this paper we consider the requirements for a system which automatically constructs taxonomies. There are a number of potentially useful methods for constructing hierarchically organised concepts from a collection of texts, and a number of automatic methods which permit one to associate one word with another. The important issue for the successful development of this research area is to identify techniques for labelling the relation between two candidate terms, if one exists. We consider a number of possible approaches and argue that the majority are unsuitable for our requirements.
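One classic family of techniques for labelling the relation between two candidate terms is lexico-syntactic (Hearst-style) pattern matching over a text collection. The sketch below is illustrative only; it is not the approach the paper settles on, and the patterns and example sentence are assumptions.

```python
import re

# Minimal sketch of Hearst-style lexico-syntactic patterns for labelling
# an "is-a" relation between two candidate terms. Patterns are
# illustrative, not the paper's method; single-word matches only.
PATTERNS = [
    (re.compile(r"(?P<hyper>\w+) such as (?P<hypo>\w+)"), "is-a"),
    (re.compile(r"(?P<hypo>\w+) and other (?P<hyper>\w+)"), "is-a"),
    (re.compile(r"(?P<hyper>\w+), including (?P<hypo>\w+)"), "is-a"),
]

def label_relations(sentence):
    """Return (hyponym, relation, hypernym) triples found in a sentence."""
    triples = []
    for pattern, relation in PATTERNS:
        for m in pattern.finditer(sentence.lower()):
            triples.append((m.group("hypo"), relation, m.group("hyper")))
    return triples

print(label_relations("Formats such as XML are common, like JSON and other formats."))
# [('xml', 'is-a', 'formats'), ('json', 'is-a', 'formats')]
```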
Abstract:
A practical Bayesian approach for inference in neural network models has been available for ten years, and yet it is not used frequently in medical applications. In this chapter we show how both regularisation and feature selection can bring significant benefits in diagnostic tasks through two case studies: heart arrhythmia classification based on ECG data and the prognosis of lupus. In the first of these, the number of variables was reduced by two thirds without significantly affecting performance, while in the second, only the Bayesian models had an acceptable accuracy. In both tasks, neural networks outperformed other pattern recognition approaches.
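To make the regularisation idea concrete: in the Bayesian treatment, a zero-mean Gaussian prior over the weights turns maximum-likelihood training into penalised (MAP) training. The sketch below shows only this simplest piece, on a logistic-regression stand-in rather than a neural network; the chapter's evidence framework for setting the prior precision is not reproduced, and all data are synthetic.

```python
import numpy as np

# Minimal sketch: regularisation as MAP inference. A zero-mean Gaussian
# prior with precision alpha on the weights adds an (alpha/2)*||w||^2
# penalty to the negative log-likelihood. Synthetic data; illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_map_logreg(X, y, alpha=1.0, lr=0.1, steps=2000):
    """MAP weights minimising cross-entropy(y, sigmoid(Xw)) + (alpha/2)||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) + alpha * w   # gradient of the negative log posterior
        w -= lr * grad / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(float)   # third feature is irrelevant
print(np.round(fit_map_logreg(X, y), 2))    # irrelevant weight shrinks towards zero
```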
Abstract:
Data visualization algorithms and feature selection techniques are both widely used in bioinformatics but as distinct analytical approaches. Until now there has been no method of measuring feature saliency while training a data visualization model. We derive a generative topographic mapping (GTM) based data visualization approach which estimates feature saliency simultaneously with the training of the visualization model. The approach not only provides a better projection by modeling irrelevant features with a separate noise model but also gives feature saliency values which help the user to assess the significance of each feature. We compare the quality of projection obtained using the new approach with the projections from traditional GTM and self-organizing maps (SOM) algorithms. The results obtained on a synthetic and a real-life chemoinformatics dataset demonstrate that the proposed approach successfully identifies feature significance and provides coherent (compact) projections. © 2006 IEEE.
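The key modelling device can be sketched independently of GTM: each feature is modelled as a mixture of a component-dependent density (weighted by its saliency) and a shared noise density, so a saliency near zero means the feature is explained by noise alone. A minimal sketch of that per-feature likelihood follows; the parameter values are illustrative assumptions, and the full GTM-FS training loop is not shown.

```python
import numpy as np

# Sketch of a feature-saliency likelihood: feature d mixes a
# component-dependent Gaussian (weight rho_d, the saliency) with a common
# noise Gaussian shared by all components. Values below are illustrative.

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def component_likelihood(x, mu_k, var_k, noise_mu, noise_var, rho):
    """p(x | component k); rho_d near 1 means feature d follows the component,
    rho_d near 0 means it is modelled by the separate noise density."""
    per_feature = rho * gauss(x, mu_k, var_k) + (1 - rho) * gauss(x, noise_mu, noise_var)
    return per_feature.prod()

x = np.array([0.9, -2.0])
lik = component_likelihood(
    x,
    mu_k=np.array([1.0, 0.0]), var_k=np.array([0.2, 0.2]),
    noise_mu=np.array([0.0, 0.0]), noise_var=np.array([4.0, 4.0]),
    rho=np.array([0.95, 0.05]),   # feature 0 salient, feature 1 mostly noise
)
print(lik)
```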
Abstract:
There have been two main approaches to feature detection in human and computer vision: luminance-based and energy-based. Bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of elements in a 3-element contour-alignment task? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square wave and Fourier components in a given image have a common phase. Observers judged whether the centre element (e.g. ±45° phase) was to the left or right of the flanking pair (e.g. 0° phase). Lateral offset of the centre element was varied to find the point of subjective alignment from the fitted psychometric function. This point shifted systematically to the left or right according to the sign of the centre phase, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks and other derivative-based features, but not by energy peaks, which (by design) predicted no shift at all. These results on contour alignment agree well with earlier ones from a more explicit feature-marking task, and strongly suggest that human vision does not use local energy peaks to locate basic first-order features. [Supported by the Wellcome Trust (ref: 056093)]
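The two definitions can be sketched directly for such 1-D stimuli: luminance features come from the signal (or its derivatives), while local energy is f(x)² + h(x)² with h the Hilbert transform of f, whose peak locations are invariant to the common phase by construction. A minimal sketch, with illustrative stimulus parameters rather than those of the experiment:

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of the two feature definitions on a Morrone-Burr style stimulus:
# square-wave harmonics (amplitudes 1/n) sharing a common phase phi.
# Luminance features follow the signal; local energy E = f^2 + h^2 does not
# move when phi changes, which is why it predicts no alignment shift.

x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
phi = np.pi / 4                              # common phase, e.g. 45 deg
f = sum(np.cos(n * x + phi) / n for n in (1, 3, 5, 7, 9))

h = np.imag(hilbert(f))                      # quadrature (Hilbert) signal
energy = f ** 2 + h ** 2                     # local energy

def peaks(sig):
    return np.where((sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:]))[0] + 1

print("luminance peaks at x =", x[peaks(f)][:3])
print("energy peaks at x =", x[peaks(energy)][:3])
# Changing phi moves the luminance peaks but leaves the energy peaks fixed.
```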
Abstract:
Relocation, an intraorganizational geographical transfer, can be used for human resource development (HRD) because of the positive developmental effects it can induce. It is, thus, important for HRD professionals to understand the implications of relocation to ensure it is used appropriately and effectively as an HRD technique. Research on relocation is abundant but presently lacks integration. This article introduces the Four-Factor Taxonomy of Relocation Outcomes, which summarizes, organizes, and guides research in this area. The taxonomy provides researchers with four dimensions along which to consistently classify relocation outcomes: valence (positive vs. negative), duration (length of effect), magnitude (strength of effect), and quality (type of effect). The article concludes with a discussion of implications for HRD practitioners and researchers.
Abstract:
We present a new form of contrast masking in which the target is a patch of low spatial frequency grating (0.46 c/deg) and the mask is a dark thin ring that surrounds the centre of the target patch. In matching and detection experiments we found little or no effect for binocular presentation of mask and test stimuli. But when mask and test were presented briefly (33 or 200 ms) to different eyes (dichoptic presentation), masking was substantial. In a 'half-binocular' condition the test stimulus was presented to one eye, but the mask stimulus was presented to both eyes with zero-disparity. This produced masking effects intermediate to those found in dichoptic and full-binocular conditions. We suggest that interocular feature matching can attenuate the potency of interocular suppression, but unlike in previous work (McKee, S. P., Bravo, M. J., Taylor, D. G., & Legge, G. E. (1994) Stereo matching precedes dichoptic masking. Vision Research, 34, 1047) we do not invoke a special role for depth perception. © 2004 Elsevier Ltd. All rights reserved.
Abstract:
A novel biosensing system based on a micromachined rectangular silicon membrane is proposed and investigated in this paper. A distributive sensing scheme is designed to monitor the dynamics of the sensing structure. An artificial neural network is used to process the measured data and to identify cell presence and density. Without specifying any particular bio-application, the investigation concentrates on performance testing of this kind of biosensor as a general biosensing platform. The biosensing experiments on the microfabricated membranes involve seeding different cell densities onto the sensing surface of the membrane and measuring the corresponding dynamics of each tested silicon membrane in the form of a series of frequency response functions (FRFs). All of these experiments are carried out in cell culture medium to simulate a practical working environment. The EA.hy 926 endothelial cell line was chosen for the bio-experiments; it represents a class of biological particles with irregular shapes, non-uniform density and uncertain growth behaviour, which are difficult to monitor using traditional biosensors. The final predicted results reveal that a neural-network-based algorithm for identifying cell features from distributive sensory measurements has great potential in biosensing applications.
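The identification step amounts to supervised learning from FRF measurements to cell density. A minimal sketch with synthetic stand-in data follows; the toy "physics" (added mass shifting the resonant response) and all parameter choices are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch: a neural network mapping FRF magnitudes (sampled at a grid of
# frequencies per membrane) to seeded cell density. Data are synthetic
# stand-ins, not the paper's measurements.
rng = np.random.default_rng(1)
n_membranes, n_bins = 60, 40
density = rng.uniform(0.0, 1.0, n_membranes)          # normalised cell density
freqs = np.linspace(0.1, 1.0, n_bins)
# Toy physics: higher density lowers the resonant peak position; add noise.
frf = np.exp(-((freqs[None, :] - (0.6 - 0.3 * density[:, None])) ** 2) / 0.01)
frf += 0.05 * rng.normal(size=frf.shape)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                                   random_state=0))
model.fit(frf[:50], density[:50])
print(model.score(frf[50:], density[50:]))   # R^2 on held-out membranes
```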
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
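The central decision-theoretic distinction, sensitivity versus response criterion, can be made concrete with the standard signal detection computations: d' = z(H) - z(F) and c = -(z(H) + z(F))/2, where H and F are the hit and false-alarm rates. A minimal sketch, with made-up counts chosen to mimic the reported pattern (stable sensitivity, rising criterion over the work period):

```python
from statistics import NormalDist

# Sketch of the sensitivity/criterion separation used in the analysis:
# d' indexes perceptual efficiency, c the strictness of the response
# criterion. Counts below are invented to illustrate the typical trend.

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    h = hits / (hits + misses)                               # hit rate
    f = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate
    d_prime = z(h) - z(f)
    criterion = -0.5 * (z(h) + z(f))                         # positive = stricter
    return d_prime, criterion

print(dprime_and_criterion(45, 5, 10, 90))  # early block: d' ~ 2.56, c ~ 0.00
print(dprime_and_criterion(35, 15, 2, 98))  # late block:  d' ~ 2.58, c ~ 0.76
# Sensitivity holds steady while the criterion rises with time on task.
```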
Abstract:
Influential models of edge detection have generally supposed that an edge is detected at peaks in the 1st derivative of the luminance profile, or at zero-crossings in the 2nd derivative. However, when presented with blurred triangle-wave images, observers consistently marked edges not at these locations, but at peaks in the 3rd derivative. This new phenomenon, termed 'Mach edges', persisted when a luminance ramp was added to the blurred triangle-wave. Modelling of these Mach edge detection data required the addition of a physiologically plausible filter, prior to the 3rd derivative computation. A viable alternative model was examined, on the basis of data obtained with short-duration, high spatial-frequency stimuli. Detection and feature-marking methods were used to examine the perception of Mach bands in an image set that spanned a range of Mach band detectabilities. A scale-space model that computed edge and bar features in parallel provided a better fit to the data than 4 competing models that combined information across scale in a different manner, or computed edge or bar features at a single scale. The perception of luminance bars was examined in 2 experiments. Data for one image set suggested a simple rule for perception of a small Gaussian bar on a larger inverted Gaussian bar background. In previous research, discriminability (d') has typically been reported to be a power function of contrast, where the exponent (p) is 2 to 3. However, using bar, grating, and Gaussian edge stimuli, with several methodologies, values of p were obtained that ranged from 1 to 1.7 across 6 experiments. This novel finding was explained by appealing to low stimulus uncertainty, or a near-linear transducer.
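The competing edge definitions are easy to state computationally: find peaks in the 1st derivative of the blurred luminance profile, zero-crossings in the 2nd, or peaks in the 3rd. A minimal sketch on a blurred triangle corner, with illustrative parameters and no attempt to reproduce the study's filter stage:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Sketch: candidate edge locations on a Gaussian-blurred triangle corner.
# Stimulus parameters are illustrative, not those of the experiments.
x = np.linspace(-1.0, 1.0, 2001)
lum = gaussian_filter1d(1.0 - np.abs(x), sigma=80)   # blurred triangle peak

d1 = np.gradient(lum, x)
d2 = np.gradient(d1, x)
d3 = np.gradient(d2, x)

region = np.abs(x) < 0.5                 # look around the rounded corner
a3 = np.abs(d3[region])
pk = np.where((a3[1:-1] > a3[:-2]) & (a3[1:-1] > a3[2:]))[0] + 1
print("3rd-derivative peaks (Mach-edge predictions):", x[region][pk])
# The 1st derivative is monotonic through the corner, so a gradient-peak
# rule marks no edge here; the two 3rd-derivative peaks flank the corner,
# which is where observers placed 'Mach edges'.
```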
Abstract:
Nearest feature line-based subspace analysis is proposed for the first time in this paper. Compared with conventional methods, the newly proposed one brings better generalization performance and supports incremental analysis. The projection point and the feature line distance are expressed as functions of a subspace, which is obtained by minimizing the mean square feature line distance. Moreover, by adopting a stochastic approximation rule to minimize the objective function in a gradient manner, the new method can be performed in an incremental mode, which makes it work well on future data. Experimental results on the FERET face database and the UCI satellite image database demonstrate its effectiveness.
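The core quantity is the feature line distance: a query point is projected onto the line through two same-class prototypes, and the residual norm is the distance that the subspace is trained to minimise. A minimal sketch of that computation (the subspace learning itself is not shown):

```python
import numpy as np

# Sketch of the feature line distance: project query q onto the line
# through prototypes x1 and x2, then measure the residual norm.

def feature_line_distance(q, x1, x2):
    """Return (distance from q to line(x1, x2), projection point)."""
    direction = x2 - x1
    t = np.dot(q - x1, direction) / np.dot(direction, direction)  # line parameter
    p = x1 + t * direction                                        # projection point
    return np.linalg.norm(q - p), p

q  = np.array([1.0, 2.0])
x1 = np.array([0.0, 0.0])
x2 = np.array([2.0, 0.0])
dist, p = feature_line_distance(q, x1, x2)
print(dist, p)   # 2.0, projection at [1. 0.]
```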
Abstract:
Web APIs have gained increasing popularity in recent Web service technology development owing to the simplicity of their technology stack and the proliferation of mashups. However, efficiently discovering Web APIs and their documentation on the Web is still a challenging task, even with the best resources available. In this paper we cast the problem of detecting Web API documentation as a text classification problem: classifying a given Web page as Web API associated or not. We propose a supervised generative topic model called feature latent Dirichlet allocation (feaLDA) which offers a generic probabilistic framework for automatic detection of Web APIs. feaLDA not only captures the correspondence between data and the associated class labels, but also provides a mechanism for incorporating side information, such as labelled features automatically learned from data, that can effectively help improve classification performance. Extensive experiments on our Web API documentation dataset show that the feaLDA model outperforms three strong supervised baselines (naive Bayes, support vector machines, and the maximum entropy model) by over 3% in classification accuracy. In addition, feaLDA also gives superior performance when compared against other existing supervised topic models.
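feaLDA itself is not sketched here, but the task framing is easy to make concrete as a binary text classifier over page text. The toy pipeline below stands in for one of the paper's baselines (a maximum-entropy/logistic-regression classifier); the example pages and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Sketch of the task framing only: classify a Web page as Web API
# documentation (1) or not (0). A maximum-entropy style baseline, not
# feaLDA; training pages are invented for illustration.
pages = [
    "GET /v1/users returns a JSON array; authenticate with an API key",
    "endpoint reference: POST /orders, response codes 200 201 400",
    "my holiday photos from the beach last summer",
    "club fixtures and match reports for the local league",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(pages, labels)
print(clf.predict(["API key required; GET /v1/items returns JSON"]))  # -> [1]
```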
Abstract:
In this poster we presented our preliminary work on spammer detection and analysis with 50 active honeypot profiles implemented on the Weibo.com and QQ.com microblogging networks. We separated spammers from legitimate users by manually checking every captured user's microblog content, and built a spammer dataset and a legitimate-user dataset for each social network community. We analyzed and compared several features of the two user classes, which were found to be useful for distinguishing spammers from legitimate users. The following are initial observations from our analysis of the spammers captured on Weibo.com and QQ.com (a sketch of the feature computation follows the list):
- The following/follower ratio of spammers is usually higher than that of legitimate users. Spammers tend to follow a large number of users in order to gain popularity, but usually have relatively few followers.
- There is a big gap between the average number of microblogs posted per day by the two classes. On Weibo.com, spammers post far more microblogs per day than legitimate users, while on QQ.com spammers post far fewer. This is mainly due to the different strategies taken by spammers on the two platforms.
- More spammers choose a cautious spam-posting pattern. They mix spam microblogs with ordinary ones so that they can evade the anti-spam mechanisms deployed by the service providers.
- Aggressive spammers are more likely to be detected, so they tend to have a shorter life, while cautious spammers survive much longer and have a deeper influence on the network. The latter kind may become the dominant type of social network spammer.
© 2012 IEEE.
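For concreteness, the two profile features quoted most often above can be computed as below; the field names and numbers are illustrative, not the honeypot schema.

```python
from dataclasses import dataclass

# Sketch of the profile features compared across the two user classes:
# following/follower ratio and average microblogs posted per day.
# Field names and values are invented for illustration.

@dataclass
class Account:
    following: int
    followers: int
    posts: int
    days_active: int

def features(a: Account):
    ff_ratio = a.following / max(a.followers, 1)     # high for typical spammers
    posts_per_day = a.posts / max(a.days_active, 1)
    return ff_ratio, posts_per_day

spammer = Account(following=2000, followers=15, posts=900, days_active=30)
legit   = Account(following=180,  followers=150, posts=60,  days_active=30)
print(features(spammer))   # ~ (133.3, 30.0)
print(features(legit))     # ~ (1.2, 2.0)
```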