235 results for chromatic feature extraction


Relevance: 20.00%

Abstract:

Mandatory reporting is a key aspect of Australia's approach to protecting children and is incorporated into all jurisdictions' legislation, albeit in a variety of forms. In this article we examine all major newspapers' coverage of mandatory reporting during an 18-month period in 2008-2009, when high-profile tragedies and inquiries occurred and significant policy and reform agendas were being debated. Mass media use a variety of lenses to inform and shape public responses and attitudes to reported events. We use frame analysis to identify the ways in which stories were composed and presented, and how language portrayed this contested area of policy. The results indicate that, within an overall portrayal of system failure and the need for reform, the coverage placed major responsibility on child protection agencies for the over-reporting, under-reporting, and overburdened system identified, along with the failure of mandatory reporting to reduce risk. The implications for ongoing reform are explored, along with the need for robust research to inform debate about the merits of mandatory reporting.

Relevance: 20.00%

Abstract:

This chapter takes as its central premise the human capacity to adapt to changing environments, an idea that is central to complexity theory but receives only modest attention in relation to learning. To do this, we draw from a range of fields and then consider some recent research in motor control that may extend the discussion in ways not yet considered, while building on advances already made within pedagogy and motor control synergies. Recent work in motor control indicates that humans have a far greater capacity to adapt to the 'product space' than was previously thought, mainly through fast heuristics and on-line corrections. These are changes that can be made in real (movement) time and are facilitated by so-called 'feed-forward' mechanisms that take advantage of ultra-fast ways of recognizing the likely outcomes of our movements and using these as a source of feedback. We conclude by discussing some possible ideas for pedagogy within the sport and physical activity domains, the implications of which would require a rethink of how motor skill learning opportunities might best be facilitated.

Relevance: 20.00%

Abstract:

The mean shift tracker has achieved great success in visual object tracking owing to its efficiency and nonparametric nature. However, it remains difficult for the tracker to handle scale changes of the object. In this paper, we combine a scale-adaptive approach with the mean shift tracker. First, the target in the current frame is located by the mean shift tracker. Then, a feature point matching procedure is employed to obtain matched pairs of feature points between the target regions in the current frame and the previous frame; we use the FAST-9 corner detector and the HOG descriptor for feature matching. Finally, using the matched feature point pairs, the affine transformation between the target regions in the two frames is solved to obtain the current scale of the target. Experimental results show that the proposed tracker gives satisfying results when the scale of the target changes, while maintaining good efficiency.
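The scale-estimation step lends itself to a short sketch. The following Python/OpenCV fragment is a minimal illustration, not the authors' implementation: ORB descriptors stand in for the HOG descriptors used in the paper, a similarity (partial affine) transform stands in for the full affine solve, and all names and thresholds are illustrative.

    # Sketch: estimate the target's scale change from matched feature points.
    # Assumptions: ORB descriptors replace the paper's HOG descriptors; a partial
    # affine (similarity) transform is used, so its scale term gives the size change.
    import cv2
    import numpy as np

    fast = cv2.FastFeatureDetector_create(threshold=20)      # FAST-9 (9_16) is the default type
    orb = cv2.ORB_create()                                    # descriptor, stand-in for HOG
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def estimate_scale(prev_patch, curr_patch):
        """Return the scale factor between two target patches, or 1.0 on failure."""
        kp1 = fast.detect(prev_patch, None)
        kp2 = fast.detect(curr_patch, None)
        kp1, des1 = orb.compute(prev_patch, kp1)
        kp2, des2 = orb.compute(curr_patch, kp2)
        if des1 is None or des2 is None:
            return 1.0
        matches = matcher.match(des1, des2)
        if len(matches) < 3:
            return 1.0
        src = np.float32([kp1[m.queryIdx].pt for m in matches])
        dst = np.float32([kp2[m.trainIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        if M is None:
            return 1.0
        return float(np.sqrt(M[0, 0] ** 2 + M[0, 1] ** 2))   # scale term of the similarity

The returned factor would then be used to rescale the mean shift search window for the next frame.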

Relevance: 20.00%

Abstract:

In this paper, we propose a highly reliable fault diagnosis scheme for incipient low-speed rolling element bearing failures. The scheme consists of fault feature calculation, discriminative fault feature analysis, and fault classification. The proposed approach first computes wavelet-based fault features, including the relative wavelet packet node energies and entropies, by applying a wavelet packet transform to an incoming acoustic emission signal. The most discriminative fault features are then filtered from the originally produced feature vector using discriminative fault feature analysis based on a binary bat algorithm (BBA). Finally, the proposed approach employs one-against-all multiclass support vector machines to identify multiple low-speed rolling element bearing defects. This study compares the proposed BBA-based dimensionality reduction scheme with four other dimensionality reduction methodologies in terms of classification performance. Experimental results show that the proposed methodology is superior to the other dimensionality reduction approaches, yielding average classification accuracies of 94.9%, 95.8%, and 98.4% at bearing rotational speeds of 20 revolutions per minute (RPM), 80 RPM, and 140 RPM, respectively.
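As a rough illustration of the feature-calculation and classification stages, the sketch below uses PyWavelets and scikit-learn. The wavelet ('db4') and decomposition level are illustrative choices, the entropy is a single Shannon entropy over the relative node energies (a simplification of the per-node features), and the BBA-based feature selection stage is omitted entirely.

    # Sketch: relative wavelet packet node energies + an entropy feature, fed to a
    # one-against-all SVM. Assumptions: PyWavelets and scikit-learn; 'db4'/level 3
    # are illustrative; the paper's BBA feature-selection step is not reproduced.
    import numpy as np
    import pywt
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    def wp_features(signal, wavelet="db4", level=3):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        energies = np.array([np.sum(np.square(node.data))
                             for node in wp.get_level(level, order="natural")])
        rel_energy = energies / energies.sum()                         # relative node energies
        entropy = -np.sum(rel_energy * np.log2(rel_energy + 1e-12))    # energy distribution entropy
        return np.concatenate([rel_energy, [entropy]])

    # X_raw: list of acoustic emission signals, y: bearing defect labels (placeholders)
    # X = np.vstack([wp_features(s) for s in X_raw])
    # clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)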

Relevance: 20.00%

Abstract:

Erythropoietin (EPO), a glycoprotein hormone of ∼34 kDa, is an important hematopoietic growth factor that is produced mainly in the kidney and controls the number of red blood cells circulating in the bloodstream. Sensitive and rapid tools for detecting recombinant human EPO (rHuEPO), improving on the current laborious EPO detection techniques, are in high demand in both the clinical and sports sectors. A sensitive aptamer-functionalized biosensor (aptasensor) has been developed by controlled growth of gold nanostructures (AuNS) over a gold substrate (pAu/AuNS). The aptasensor selectively binds rHuEPO and was therefore used to extract and detect the drug from horse plasma by surface-enhanced Raman spectroscopy (SERS). Owing to the nanogap separation between the nanostructures and the high population and distribution of hot spots on the pAu/AuNS substrate surface, strong signal enhancement was obtained. Using a wide-area illumination (WAI) setting for the Raman detection, a low RSD of 4.92% over 150 SERS measurements was achieved. The significant reproducibility of the new biosensor addresses the serious problem of SERS signal inconsistency that hampers the use of the technique in the field. The WAI setting is compatible with handheld Raman devices; therefore, the new aptasensor can be used for the selective extraction of rHuEPO from biological fluids, followed by screening with a handheld Raman spectrometer for SERS-based in-field protein detection.

Relevance: 20.00%

Abstract:

Objective: This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, we determine (1) the contribution of active learning in reducing the annotation effort, and (2) the robustness of an incremental active learning framework across different selection criteria and datasets. Materials and methods: The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields were used as the supervised method, with least confidence and information density as the two selection criteria for the active learning framework. The effect of incremental learning versus standard learning on the robustness of the models within the active learning framework, under different selection criteria, was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Results: The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled. Discussion: Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction of annotation effort is always above the random sampling and longest sequence baselines. Conclusion: Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation.
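A least-confidence query step of the kind described can be sketched with sklearn-crfsuite as below. This is an illustration under stated assumptions, not the paper's system: sequence confidence is approximated by the product of per-token maximum marginals, and the batch size and variable names are placeholders.

    # Sketch: one least-confidence selection step for CRF sequence labelling.
    # Assumptions: sklearn-crfsuite; sequence confidence approximated by the product
    # of per-token maximum marginals (a simplification of the exact criterion).
    import numpy as np
    import sklearn_crfsuite

    def least_confidence_query(crf, X_pool, batch_size=50):
        """Return indices of the pool sequences the model is least confident about."""
        marginals = crf.predict_marginals(X_pool)        # per-token label distributions
        confidences = []
        for seq in marginals:
            token_max = [max(dist.values()) for dist in seq]
            confidences.append(np.prod(token_max))       # approximate sequence confidence
        return np.argsort(confidences)[:batch_size]      # least confident first

    # One incremental round (X_lab/y_lab are the sequences annotated so far):
    # crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    # crf.fit(X_lab, y_lab)
    # query_idx = least_confidence_query(crf, X_pool)
    # ...annotate the queried sequences, move them into the labelled set, and refit.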

Relevance: 20.00%

Abstract:

The speed at which target pictures are named increases monotonically as a function of prior retrieval of other exemplars of the same semantic category and is unaffected by the number of intervening items. This cumulative semantic interference effect is generally attributed to three mechanisms: shared feature activation, priming, and lexical-level selection. However, at least two additional mechanisms have been proposed: (1) a 'booster' to amplify lexical-level activation and (2) retrieval-induced forgetting (RIF). In a perfusion functional Magnetic Resonance Imaging (fMRI) experiment, we tested hypotheses concerning the involvement of all five mechanisms. Our results demonstrate that the cumulative interference effect is associated with perfusion signal changes in the left perirhinal and middle temporal cortices that increase monotonically according to the ordinal position of the exemplars being named. The left inferior frontal gyrus (LIFG) also showed significant perfusion signal changes across ordinal presentations; however, these responses did not conform to a monotonically increasing function. None of the cerebral regions linked with RIF in prior neuroimaging and modelling studies showed significant effects. This might be due to methodological differences between the RIF paradigm and continuous naming, as the latter does not involve practising particular information. We interpret the results as indicating that priming of shared features and lexical-level selection mechanisms contribute to the cumulative interference effect, while adding noise to a booster mechanism could account for the pattern of responses observed in the LIFG.

Relevance: 20.00%

Abstract:

How does the presence of a categorically related word influence picture naming latencies? In order to test competitive and noncompetitive accounts of lexical selection in spoken word production, we employed the picture–word interference (PWI) paradigm to investigate how conceptual feature overlap influences naming latencies when distractors are category coordinates of the target picture. Mahon et al. (2007; "Lexical selection is not by competition: A reinterpretation of semantic interference and facilitation effects in the picture-word interference paradigm," Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(3), 503–535; doi:10.1037/0278-7393.33.3.503) reported that semantically close distractors (e.g., zebra) facilitated target picture naming latencies (e.g., HORSE) compared to far distractors (e.g., whale). We failed to replicate a facilitation effect for within-category close versus far target–distractor pairings using near-identical materials based on feature production norms, instead obtaining reliably larger interference effects (Experiments 1 and 2). The interference effect did not show a monotonic increase across multiple levels of within-category semantic distance, although there was evidence of a linear trend when unrelated distractors were included in analyses (Experiment 2). Our results show that semantic interference in PWI is greater for semantically close than for far category coordinate relations, reflecting the extent of conceptual feature overlap between target and distractor. These findings are consistent with the assumptions of prominent competitive lexical selection models of speech production.

Relevance: 20.00%

Abstract:

An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images of the human head (proton density-weighted, T1-weighted, and T2-weighted) is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
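The reported reproducibility measure reduces to a simple computation once a binary brain mask is available. The sketch below assumes NiBabel and masks produced by the segmentation and scalp-removal stages; the file names are placeholders.

    # Sketch: total brain volume (TBV) from a binary brain mask, and the percent
    # difference between two repeated acquisitions. Assumes NiBabel; the mask files
    # stand in for the output of the segmentation/scalp-removal stages.
    import numpy as np
    import nibabel as nib

    def total_brain_volume(mask_path):
        img = nib.load(mask_path)
        voxel_volume = np.prod(img.header.get_zooms()[:3])    # mm^3 per voxel
        n_brain_voxels = np.count_nonzero(img.get_fdata() > 0.5)
        return n_brain_voxels * voxel_volume / 1000.0          # cm^3

    def percent_difference(v1, v2):
        return abs(v1 - v2) / ((v1 + v2) / 2.0) * 100.0

    # tbv_a = total_brain_volume("subject01_scan1_brainmask.nii.gz")
    # tbv_b = total_brain_volume("subject01_scan2_brainmask.nii.gz")
    # print(f"percent difference: {percent_difference(tbv_a, tbv_b):.2f}%")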

Relevance: 20.00%

Abstract:

We currently face overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, which gives rise to the phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers see some important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and the consequent need to visit different Web sources in order to collect all the important content and analyse it. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction, or information extraction from the Web) and toward understanding natural language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results, but proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural language texts, but it still lacks the ability to understand more complex information, which requires common-sense knowledge, discourse analysis, and disambiguation techniques.

Relevance: 20.00%

Abstract:

We present an empirical evaluation and comparison of two content extraction methods for HTML: absolute XPath expressions and relative XPath expressions. We argue that relative XPath expressions, although not widely used, should be preferred to absolute XPath expressions when extracting content from human-created Web documents. The evaluation of robustness covers four thousand queries executed on several hundred webpages. We show that, in referencing parts of real-world dynamic HTML documents, relative XPath expressions are on average significantly more robust than absolute ones.
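The contrast between the two kinds of expression is easy to see in code. The lxml sketch below is purely illustrative: the HTML snippet and both XPath expressions are made-up examples, not queries from the evaluation set.

    # Sketch: absolute vs. relative XPath targeting the same content, using lxml.
    # The HTML and both expressions are illustrative only.
    from lxml import html

    page = html.fromstring("""
    <html><body>
      <div class="header">Site name</div>
      <div id="content">
        <h1>Product page</h1>
        <p class="price">19.99</p>
      </div>
    </body></html>
    """)

    # Absolute: breaks as soon as any ancestor is added, removed, or reordered.
    absolute = page.xpath("/html/body/div[2]/p[1]/text()")

    # Relative: anchored on stable attributes, so it survives most layout changes.
    relative = page.xpath('//div[@id="content"]//p[@class="price"]/text()')

    print(absolute, relative)    # both yield ['19.99'] on this snapshot of the page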

Relevance: 20.00%

Abstract:

Product reviews are a foremost source of information for customers and manufacturers, helping them make appropriate purchasing and production decisions. Natural language data is typically very sparse: the most common words are those that carry little semantic content, occurrences of any particular content-bearing word are rare, and co-occurrences of such words are rarer still. With the e-commerce revolution, mining product aspects, along with the corresponding opinions, has become essential for Aspect-Based Opinion Mining (ABOM), and the need for automatic mining of reviews has never been greater. In this work, we treat ABOM as a sequence labelling problem and propose a supervised extraction method to identify product aspects and the corresponding opinions. We use Conditional Random Fields (CRFs) to solve the extraction problem and propose a feature function to enhance accuracy. The proposed method is evaluated on two different datasets. We also evaluate the effectiveness of the feature function and the optimisation through multiple experiments.
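A minimal version of the sequence-labelling setup can be sketched with sklearn-crfsuite. Everything below is illustrative rather than the paper's configuration: the BIO tag set, the tiny example sentence, and especially the token feature function, which is a generic one and not the feature function proposed in the paper.

    # Sketch: aspect/opinion extraction as BIO sequence labelling with a CRF.
    # Assumptions: sklearn-crfsuite; the token feature function is a small generic
    # one, not the paper's proposed feature function; data below is a toy example.
    import sklearn_crfsuite

    def token_features(sent, i):
        word = sent[i]
        return {
            "word.lower": word.lower(),
            "word.isupper": word.isupper(),
            "word.suffix3": word[-3:],
            "prev.lower": sent[i - 1].lower() if i > 0 else "<BOS>",
            "next.lower": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
        }

    def sent2features(sent):
        return [token_features(sent, i) for i in range(len(sent))]

    train_sents = [["The", "battery", "life", "is", "amazing"]]
    train_tags = [["O", "B-ASPECT", "I-ASPECT", "O", "B-OPINION"]]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit([sent2features(s) for s in train_sents], train_tags)
    print(crf.predict([sent2features(["The", "screen", "is", "great"])]))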

Relevance: 20.00%

Abstract:

Today, user-generated information such as online reviews has become increasingly significant for customers in the decision-making process. Meanwhile, as the volume of online reviews proliferates, there is an insistent demand to help users tackle the information overload problem. In order to extract useful information from overwhelming numbers of reviews, considerable work has been proposed, such as review summarization and review selection. In particular, to avoid redundant information, researchers attempt to select a small set of reviews to represent the entire review corpus by preserving its statistical properties (e.g., opinion distribution). However, one significant drawback of the existing work is that it measures only the utility of the extracted reviews as a whole, without considering the quality of each individual review. As a result, the set of chosen reviews may contain low-quality ones even if its statistical properties are close to those of the original review corpus, which users do not prefer. In this paper, we propose a review selection method that takes review quality into consideration during the selection process. Specifically, we examine the relationships between product features based upon a domain ontology to capture review characteristics, and use these to select reviews that are of good quality and also preserve the opinion distribution. Our experimental results on real-world review datasets demonstrate that the proposed approach is feasible and improves the performance of review selection effectively.
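One simple way to make the trade-off described above concrete is a greedy selection that balances distributional fit against review quality. The sketch below is an illustration under stated assumptions, not the proposed method: each review is reduced to a per-aspect opinion vector and a quality score (placeholders for the ontology-derived characteristics), and alpha is an arbitrary trade-off weight.

    # Sketch: greedy quality-aware review selection. Assumptions: 'opinions' is an
    # (n_reviews, n_aspects) matrix of per-aspect opinion scores and 'quality' a
    # score in [0, 1] per review (placeholders for ontology-derived characteristics).
    import numpy as np

    def select_reviews(opinions, quality, k, alpha=0.5):
        """Pick k reviews that track the corpus opinion distribution and score well on quality."""
        target = opinions.mean(axis=0)                 # corpus-level opinion distribution
        chosen = []
        for _ in range(k):
            best, best_score = None, -np.inf
            for i in range(len(opinions)):
                if i in chosen:
                    continue
                cand = opinions[chosen + [i]].mean(axis=0)
                fit = -np.linalg.norm(cand - target)   # closer to the corpus distribution is better
                score = alpha * fit + (1 - alpha) * quality[i]
                if score > best_score:
                    best, best_score = i, score
            chosen.append(best)
        return chosen

    # opinions = np.random.rand(100, 5); quality = np.random.rand(100)
    # print(select_reviews(opinions, quality, k=10))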

Relevance: 20.00%

Abstract:

The latest generation of Deep Convolutional Neural Networks (DCNNs) has dramatically advanced challenging computer vision tasks, especially object detection and object classification, achieving state-of-the-art performance in text recognition, sign recognition, face recognition, and scene understanding. The depth of these supervised networks has enabled learning deeper and more hierarchical representations of features. In parallel, unsupervised deep learning models such as the Convolutional Deep Belief Network (CDBN) have also achieved state-of-the-art results in many computer vision tasks. However, there is very limited research on jointly exploiting the strengths of these two approaches. In this paper, we investigate the learning capability of both methods. We compare the outputs of individual layers and show that many learnt filters, and the outputs of the corresponding layers, are very similar for both approaches. Stacking the DCNN on top of unsupervised layers, or replacing layers in the DCNN with the corresponding learnt layers of the CDBN, can improve recognition/classification accuracy and reduce the computational cost of training. We demonstrate the validity of the proposal on the ImageNet dataset.
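The "stack the supervised network on top of unsupervised layers" idea can be sketched in PyTorch as below. The frozen convolutional block merely stands in for CDBN layers (its weights would come from unsupervised pre-training, loaded where indicated), and the architecture, sizes, and names are all illustrative.

    # Sketch: a supervised head stacked on frozen, unsupervised pre-trained layers.
    # Assumptions: PyTorch; the conv block stands in for CDBN-learnt layers (weights
    # would be copied from unsupervised pre-training); sizes/names are illustrative.
    import torch
    import torch.nn as nn

    unsupervised_layers = nn.Sequential(        # stand-in for CDBN-learnt filters
        nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    )
    # unsupervised_layers.load_state_dict(pretrained_cdbn_weights)  # hypothetical weights
    for p in unsupervised_layers.parameters():
        p.requires_grad = False                 # keep the unsupervised features fixed

    supervised_head = nn.Sequential(            # trained with labels as usual
        nn.Flatten(),
        nn.Linear(64 * 56 * 56, 256), nn.ReLU(),
        nn.Linear(256, 1000),                   # e.g. ImageNet's 1000 classes
    )

    model = nn.Sequential(unsupervised_layers, supervised_head)
    logits = model(torch.randn(1, 3, 224, 224)) # sanity check with one dummy 224x224 image
    print(logits.shape)                          # torch.Size([1, 1000])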