985 results for face classification


Relevance:

20.00%

Publisher:

Abstract:

Next Generation Sequencing (NGS) has revolutionised molecular biology, resulting in an explosion of data sets and an increasing role in clinical practice. Such applications necessarily require rapid identification of the organism as a prelude to annotation and further analysis. NGS data consist of a substantial number of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. Highly accurate results have been obtained for restricted sets using SVM classifiers, but such methods are difficult to parallelise and success depends on careful attention to feature selection. This work examines the problem at very large scale, using a mix of synthetic and real data with a view to determining the overall structure of the problem and the effectiveness of parallel ensembles of simpler classifiers (principally random forests) in addressing the challenges of large scale genomics.
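The read-classification task described above can be sketched at toy scale. The abstract's method uses parallel ensembles of random forests; as a simpler stand-in, this sketch pairs the k-mer frequency representation common in alignment-free read classification with a nearest-centroid classifier. All reads, species labels and the choice of k = 3 are hypothetical, for illustration only.

```python
from collections import Counter
from itertools import product

K = 3  # k-mer length; real pipelines typically use larger k (assumption)
ALPHABET = "ACGT"
KMERS = ["".join(p) for p in product(ALPHABET, repeat=K)]

def kmer_profile(read):
    """Normalised k-mer frequency vector for one short read."""
    counts = Counter(read[i:i + K] for i in range(len(read) - K + 1))
    total = sum(counts.values()) or 1
    return [counts.get(km, 0) / total for km in KMERS]

def centroid(profiles):
    """Mean profile of a set of reads from one (assumed) species."""
    n = len(profiles)
    return [sum(col) / n for col in zip(*profiles)]

def classify(read, centroids):
    """Assign a read to the species whose centroid is nearest (L1 distance)."""
    p = kmer_profile(read)
    return min(centroids,
               key=lambda s: sum(abs(a - b) for a, b in zip(p, centroids[s])))

# Toy training data: reads with a distinct compositional bias per "species".
train = {
    "sp_AT": ["ATATATTTAA", "TTAATATATA", "AATTATATTA"],
    "sp_GC": ["GCGCGGCCGG", "CCGGCGCGGC", "GGCCGCGCCG"],
}
centroids = {s: centroid([kmer_profile(r) for r in reads])
             for s, reads in train.items()}
print(classify("ATATTAATAT", centroids))  # an AT-rich query read
```

Because each read's profile is computed independently, this representation parallelises trivially across reads, which is the property the abstract exploits with ensembles of forests.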


Within the field of Information Systems, a good proportion of research is concerned with the work organisation and this has, to some extent, restricted the kind of application areas given consideration. Yet, it is clear that information and communication technology deployments beyond the work organisation are acquiring increased importance in our lives. With this in mind, we offer a field study of the appropriation of an online play space known as Habbo Hotel. Habbo Hotel, as a site of media convergence, incorporates social networking and digital gaming functionality. Our research highlights the ethical problems such a dual classification of technology may bring. We focus upon a particular set of activities undertaken within and facilitated by the space – scamming. Scammers dupe members with respect to their ‘Furni’, virtual objects that have online and offline economic value. Through our analysis we show that sometimes, online activities are bracketed off from those defined as offline and that this can be related to how the technology is classified by members – as a social networking site and/or a digital game. In turn, this may affect members’ beliefs about rights and wrongs. We conclude that given increasing media convergence, the way forward is to continue the project of educating people regarding the difficulties of determining rights and wrongs, and how rights and wrongs may be acted out with respect to new technologies of play online and offline.


The noble idea of studying seminal works to ‘see what we can learn’ has turned in the 1990s into ‘let’s see what we can take’ and in the last decade a more toxic derivative ‘what else can’t we take’. That is my observation as a student of architecture in the 1990s, and as a practitioner in the 2000s. In 2010, the sense that something is ending is clear. The next generation is rising and their gaze has shifted. The idea of classification (as a means of separation) was previously rejected by a generation of Postmodernists; the usefulness of difference declined. It’s there in the presence of plurality in the resulting architecture, a decision to mine history and seize in a willful manner. This is a process of looking back but never forward. It has been a mono-culture of absorption. The mono-culture rejected the pursuit of the realistic. It is a blanket suffocating all practice of architecture in this country from the mercantile to the intellectual. Independent reviews of Australia’s recent contributions to the Venice Architecture Biennales confirm the malaise. The next generation is beginning to reconsider classification as a means of unification. By acknowledging the characteristics of competing forces it is possible to bring them into a state of tension. Seeking a beautiful contrast is a means to a new end. In the political setting, this is described by Noel Pearson as the radical centre[1]. The concept transcends the political and in its most essential form is a cultural phenomenon. It resists the compromised position and suggests that we can look back while looking forward. The radical centre is the only demonstrated opportunity where it is possible to pursue a realistic architecture. A realistic architecture in Australia may be partially resolved by addressing our anxiety of permanence. Farrelly’s built desires[2] and Markham’s ritual demonstrations[3] are two ways into understanding the broader spectrum of permanence. 
But I think they are downstream of our core problem. Our problem, as architects, is that we are yet to come to terms with this place. Some call it landscape, others call it country. Australian cities were laid out on what was mistaken for a blank canvas. On some occasions there was consideration of the landscape when it presented insurmountable physical obstacles. The architecture since has continued to work on its piece of a constantly blank canvas. Even more ironic are the commercial awards programs that represent a claim within this framework but at best can only establish a dialogue within itself. This is a closed system unable to look forward. It is said that Melbourne is the most European city in the southern hemisphere, but what is really being described there is the limitation of a senseless grid. After all, if Dutch landscape informs Dutch architecture, why can’t the Australian landscape inform Australian architecture? To do that, we would have to acknowledge our moribund grasp of the meaning of the Australian landscape. Or more precisely what Indigenes call Country[4]. This is a complex notion and there are different ways into it. Country is experienced and understood through the senses and seared into memory. If one begins design at that starting point it is not unreasonable to think we can arrive at an end point that is a counter trajectory to where we have taken ourselves. A recent studio with Masters students confirmed this. Start by finding Country and it would be impossible to end up with a building looking like an Aboriginal man’s face. To date architecture in Australia has overwhelmingly ignored Country on the back of terra nullius. It can’t seem to get past the picturesque. Why is it so hard? The art world came to terms with this challenge, so too did the legal establishment; even the political scene headed into new waters. It would be easy to blame the budgets of commerce or the constraints of program or even the pressure of success.
But that is too easy. Those factors are in fact the kind of limitations that opportunities grow out of. The past decade of economic plenty has, for the most part, smothered the idea that our capitals might enable civic settings or an architecture that is able to look past lot line boundaries in a dignified manner. Denying these settings the opportunity to be prompted by the Country they occupy is criminal. The public realm is arrested in its development because we refuse to accept Country as a spatial condition. What we seem able to embrace is literal and symbolic gestures, usually taking the form of trumped-up art installations. All talk – no action. To continue to leave the public realm to the stewardship of mercantile interests is like embracing derivative lending after the global financial crisis. Herein rests an argument for why we need a resourced Government Architect’s office operating not as an isolated lobbyist for business but as a steward of the public realm for both the past and the future. New South Wales is the leading model with Queensland close behind. That is not to say both do not have flaws, but current calls for their cessation on the grounds of design parity poorly mask commercial self-interest. In Queensland, lobbyists are now heavily regulated with an aim to ensure integrity and accountability. In essence, what I am speaking of will not be found in Reconciliation Action Plans that double as business plans, or the mining of Aboriginal culture for the next marketing gimmick, or even discussions around how to make buildings more ‘Aboriginal’. It will come from the next generation who reject the noxious mono-culture of absorption and embrace a counter trajectory to pursue an architecture of realism.


Determination of sequence similarity is a central issue in computational biology, a problem addressed primarily through BLAST, an alignment-based heuristic which has underpinned much of the analysis and annotation of the genomic era. Despite their success, alignment-based approaches scale poorly with increasing data set size, and are not robust under structural sequence rearrangements. Successive waves of innovation in sequencing technologies – so-called Next Generation Sequencing (NGS) approaches – have led to an explosion in data availability, challenging existing methods and motivating novel approaches to sequence representation and similarity scoring, including adaptation of existing methods from other domains such as information retrieval. In this work, we investigate locality-sensitive hashing of sequences through binary document signatures, applying the method to a bacterial protein classification task. Here, the goal is to predict the gene family to which a given query protein belongs. Experiments carried out on a pair of small but biologically realistic datasets (the full protein repertoires of families of Chlamydia and Staphylococcus aureus genomes respectively) show that a measure of similarity obtained by locality-sensitive hashing gives highly accurate results while offering a number of avenues which will lead to substantial performance improvements over BLAST.
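The idea of a binary document signature for a sequence can be made concrete with a SimHash-style sketch: hash each k-mer "shingle" of a protein, take a per-bit majority vote, and compare signatures by Hamming distance. This is one standard locality-sensitive construction, not necessarily the exact scheme the abstract evaluates; the sequences, the 64-bit width and k = 3 are assumptions for illustration.

```python
import hashlib

BITS = 64  # signature width (assumption; real systems may use wider signatures)

def kmers(seq, k=3):
    """Overlapping k-mer shingles of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def simhash(seq, k=3):
    """Binary signature: sign of a per-bit vote over hashed k-mer shingles."""
    votes = [0] * BITS
    for km in kmers(seq, k):
        h = int.from_bytes(hashlib.md5(km.encode()).digest()[:8], "big")
        for b in range(BITS):
            votes[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(BITS) if votes[b] > 0)

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

s1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy protein sequence
s2 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"   # near-identical variant
s3 = "GGGGPPPPWWWWCCCCHHHHHNNNNQQQQYYYY"   # unrelated composition
print(hamming(simhash(s1), simhash(s2)))  # small: shared k-mer sets
print(hamming(simhash(s1), simhash(s3)))  # larger: dissimilar sequences
```

Because comparing two signatures is a single XOR and popcount, similarity search over a large protein collection avoids the per-pair alignment cost that makes BLAST scale poorly.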


This thesis investigates face recognition in video under the presence of large pose variations. It proposes a solution that performs simultaneous detection of facial landmarks and head poses across large pose variations, employs discriminative modelling of feature distributions of faces with varying poses, and applies fusion of multiple classifiers to pose-mismatch recognition. Experiments on several benchmark datasets have demonstrated that improved performance is achieved using the proposed solution.


Fine-grained leaf classification has concentrated on the use of traditional shape and statistical features to classify ideal images. In this paper we evaluate the effectiveness of traditional hand-crafted features and propose the use of deep convolutional neural network (ConvNet) features. We introduce a range of condition variations to explore the robustness of these features, including: translation, scaling, rotation, shading and occlusion. Evaluations on the Flavia dataset demonstrate that in ideal imaging conditions, combining traditional and ConvNet features yields state-of-the-art performance with an average accuracy of 97.3% ± 0.6%, compared to traditional features, which obtain an average accuracy of 91.2% ± 1.6%. Further experiments show that this combined classification approach consistently outperforms the best set of traditional features by an average of 5.7% for all of the evaluated condition variations.
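The combination step itself is simple: normalise each feature block so neither dominates, then concatenate. A minimal sketch of that step, with hypothetical 2-D shape features and 3-D ConvNet features (real ConvNet descriptors are far higher-dimensional):

```python
import math

def zscore(rows):
    """Column-wise z-score so one feature block cannot dominate the other."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(r, means, stds)] for r in rows]

def combine(shape_feats, convnet_feats):
    """Concatenate normalised hand-crafted and ConvNet feature blocks."""
    return [a + b for a, b in zip(zscore(shape_feats), zscore(convnet_feats))]

# Toy data: two leaves per class, hypothetical feature values.
shape = [[1.0, 0.2], [1.1, 0.3], [4.0, 2.0], [4.2, 2.1]]
conv  = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1], [0.9, 0.1, 0.7], [0.8, 0.2, 0.6]]
X = combine(shape, conv)
print(len(X), len(X[0]))  # 4 samples, 2 + 3 = 5 combined features
```

The combined vectors would then be fed to any standard classifier; the paper's reported gains come from the two blocks capturing complementary information.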


Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, of the classification outcome. Records with a null entry in the injury description are removed. Misspellings are corrected by replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept intact, rather than having parts of them removed as stop words. Abbreviations appearing in many variant forms are manually identified and normalised to a single form. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical injury narrative dataset under consideration is composed of many short documents. The data can be characterized as high-dimensional and sparse, i.e., it has many features, few of which are relevant, and the features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on these reduced feature spaces. In experiments, a set of tests is conducted to determine which classification method is best for medical text classification. The Non-negative Matrix Factorization with Support Vector Machine method achieves 93% precision, which is higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long-text classification, is inferior to binary weighting for short-document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as this affects classification performance.
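The binary-versus-TF/IDF contrast can be made concrete. Both weightings are standard; the injury narratives below are hypothetical stand-ins for the dataset described above.

```python
import math
from collections import Counter

docs = [
    "fall from ladder fracture wrist",     # hypothetical injury narratives
    "fall from bicycle fracture arm",
    "burn from hot water left hand",
]

vocab = sorted({w for d in docs for w in d.split()})
df = Counter(w for d in docs for w in set(d.split()))  # document frequency
N = len(docs)

def binary_vec(doc):
    """Binary weighting: 1 if the term occurs in the document, else 0."""
    words = set(doc.split())
    return [1 if w in words else 0 for w in vocab]

def tfidf_vec(doc):
    """TF-IDF weighting: term frequency scaled by inverse document frequency."""
    tf = Counter(doc.split())
    return [tf[w] * math.log(N / df[w]) for w in vocab]

b = binary_vec(docs[0])
t = tfidf_vec(docs[0])
# In short documents almost every term occurs exactly once, so TF carries
# little information and IDF alone drives the weights -- one intuition for
# why binary weighting can match or beat TF-IDF on this kind of data.
print(sum(b), max(t))
```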


Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. HRV analysis is an important tool for observing the heart’s ability to respond to the normal regulatory impulses that affect its rhythm. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. A computer-based arrhythmia detection system for cardiac states is very useful in diagnostics and disease management. In this work, we studied the classification of HRV signals using features derived from HOS. These features were fed to a support vector machine (SVM) for classification. Our proposed system can classify normal rhythm and four other classes of arrhythmia with an average accuracy of more than 85%.
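The abstract's features come from higher order spectra, which take considerably more machinery to compute; as a simpler illustration of extracting numeric features from an RR-interval series, the sketch below computes two standard time-domain HRV measures (SDNN and RMSSD). This is a deliberate substitution, not the paper's HOS method, and the RR values are hypothetical.

```python
import math

def sdnn(rr):
    """Standard deviation of RR intervals (ms) -- overall variability."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive differences -- short-term variability."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR-interval series in milliseconds.
rr_normal = [812, 790, 845, 801, 830, 795, 820]
print(round(sdnn(rr_normal), 1), round(rmssd(rr_normal), 1))
```

Feature vectors built from measures like these (or, as in the paper, from HOS) are what get passed to the SVM for the final normal-versus-arrhythmia decision.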


In this paper we propose the hybrid use of illuminant invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows thereby invoked) is a non-negotiable requirement for long term autonomy using vision. One aspect of this is the ability to reliably classify scene components in the presence of marked and often sudden changes in lighting. This is the focus of this paper. Given the task of classifying all parts in a scene from a full colour image, we propose that lighting invariant transforms can reduce the variability of the scene, resulting in a more reliable classification. We leverage the ideas of “data transfer” for classification, beginning with full colour images for obtaining candidate scene-level matches using global image descriptors. This is commonly followed by superpixel-level matching with local features. However, we show that if the RGB images are subjected to an illuminant invariant transform before computing the superpixel-level features, classification is significantly more robust to scene illumination effects. The approach is evaluated using three datasets: the first is our own dataset and the second is the KITTI dataset, both with manually generated ground truth for quantitative analysis. We qualitatively evaluate the method on a third custom dataset over a 750m trajectory.
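One widely used illuminant invariant transform maps each RGB pixel to a single log-chromaticity channel whose weights sum to zero, so that a pure intensity change (a shadow boundary under the same illuminant spectrum) leaves the output unchanged. The sketch below assumes this one-parameter form; the alpha value and pixel values are illustrative, since in practice alpha must be calibrated to the camera's spectral response.

```python
import math

ALPHA = 0.48  # camera-dependent parameter (assumption; calibrated per sensor)

def illuminant_invariant(r, g, b):
    """One-channel log-chromaticity invariant of an RGB pixel.

    Uses the common one-parameter form
        I = 0.5 + log g - alpha * log b - (1 - alpha) * log r.
    The coefficients (+1, -alpha, -(1 - alpha)) sum to zero, so scaling
    all three channels by the same factor leaves I unchanged.
    """
    eps = 1e-6  # avoid log(0) on fully dark pixels
    return (0.5 + math.log(g + eps)
            - ALPHA * math.log(b + eps)
            - (1.0 - ALPHA) * math.log(r + eps))

# The same surface lit directly vs. in shadow: RGB values differ strongly,
# but the invariant channel stays nearly constant (toy values).
sunlit = (0.60, 0.50, 0.40)
shadow = (0.30, 0.25, 0.21)  # darker and slightly blue-shifted
print(round(illuminant_invariant(*sunlit), 3),
      round(illuminant_invariant(*shadow), 3))
```

Computing superpixel features on this channel instead of raw RGB is what gives the classification step its robustness to shadows and sudden lighting changes.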