919 results for Image Classification
Designing for engagement towards healthier lifestyles through food image sharing: the case of I8DAT
Abstract:
This paper introduces the underlying design concepts of I8DAT, a food image sharing application developed as part of a three-year research project – Eat, Cook, Grow: Ubiquitous Technology for Sustainable Food Culture in the City (http://www.urbaninformatics.net/projects/food) – exploring urban food practices to engage people in healthier, more environmentally and socially sustainable eating, cooking, and growing of food in their everyday lives. The key aim of the project is to produce actionable knowledge, which is then applied to create and test several accessible, user-centred interactive design solutions that motivate user engagement through playful and social means rather than authoritative information distribution. Through the design and implementation processes we envisage integrating these design interventions to create a sustainable food network that is both technical and socio-cultural in nature (technosocial). Our primary research locale is Brisbane, Australia, with additional work carried out in three reference cities with divergent geographic, socio-cultural, and technological backgrounds: Seoul, South Korea, for its global leadership in ubiquitous technology, broadband access, and high population density; Lincoln, UK, for the regional and peri-urban dimension it provides; and Portland, Oregon, US, for its international standing as a hub of the sustainable food movement.
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
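To make the surrogate idea concrete, here is a minimal sketch (not from the paper) showing that the hinge loss used by the support vector machine is a convex function of the margin m = y·f(x) that upper-bounds the 0–1 loss pointwise:

```python
# Hinge loss vs. 0-1 loss as functions of the margin m = y * f(x).
# Illustrative values only; the paper's results cover any surrogate
# satisfying a pointwise form of Fisher consistency.

def zero_one_loss(margin):
    """The 0-1 loss: 1 if the classifier is wrong (margin <= 0)."""
    return 1.0 if margin <= 0 else 0.0

def hinge_loss(margin):
    """The SVM's convex surrogate: max(0, 1 - margin)."""
    return max(0.0, 1.0 - margin)

# the surrogate dominates the 0-1 loss at every margin
for m in (-2.0, -0.5, 0.0, 0.3, 1.5):
    assert hinge_loss(m) >= zero_one_loss(m)
```

Minimising the convex surrogate is what makes training tractable; the paper quantifies how much excess 0–1 risk this substitution can cost.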
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
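The maximal-discrepancy penalty described above can be sketched in a few lines. The sample and the finite class of 1-D threshold classifiers below are invented stand-ins for a general dataset and hypothesis class:

```python
# Toy sketch of the maximal discrepancy: the largest gap between a
# hypothesis's empirical error on the first half of the sample and on
# the second half. Data and hypothesis class are invented.

def threshold_classifier(t):
    return lambda x: 1 if x >= t else 0

def empirical_error(h, data):
    return sum(h(x) != y for x, y in data) / len(data)

def maximal_discrepancy(hypotheses, sample):
    half = len(sample) // 2
    first, second = sample[:half], sample[half:]
    return max(empirical_error(h, first) - empirical_error(h, second)
               for h in hypotheses)

# halves with opposite label patterns -> large discrepancy
sample = [(0.2, 1), (0.3, 1), (0.7, 0), (0.8, 0),
          (0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
hypotheses = [threshold_classifier(t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
penalty = maximal_discrepancy(hypotheses, sample)
```

As the abstract notes, computing this maximum is an empirical risk minimisation over the training data with the labels of one half flipped, so any ERM routine can evaluate the penalty.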
Abstract:
The purpose of this conceptual paper is to address the lack of consistent means through which strategies are identified and discussed across theoretical perspectives in the field of business strategy. A standardised referencing system is offered to codify the means by which strategies can be identified, from which new business services and information systems may be derived. This taxonomy was developed using a qualitative content analysis of government agencies' strategic plans. It is useful for identifying strategy formation and determining gaps and opportunities. Managers will benefit from a more transparent strategic design process that reduces ambiguity, aids in identifying and correcting gaps in strategy formulation, and fosters enhanced strategic analysis. For academics, the key benefits are improved dialogue in the strategic management field and the suggestion that progress in the field requires the fundamentals of strategy formulation and classification to be considered more carefully. Finally, the formalisation of strategy can lead to the clear identification of new business services, which informs ICT investment decisions and shared service prioritisation.
Abstract:
We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
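As a rough illustration of the update family the algorithm relies on (values invented; the actual algorithm applies this to the dual variables of a quadratic program), an exponentiated-gradient step multiplies each weight on the probability simplex by exp(−η·gradient) and renormalises:

```python
# One exponentiated-gradient step on the probability simplex.
# Illustrative sketch only, not the paper's full algorithm.
import math

def eg_update(weights, gradient, eta):
    """Multiply each weight by exp(-eta * g_i), then renormalise."""
    new = [w * math.exp(-eta * g) for w, g in zip(weights, gradient)]
    z = sum(new)
    return [w / z for w in new]

w = [0.25, 0.25, 0.25, 0.25]
g = [1.0, 0.0, 0.0, 0.0]   # penalise the first coordinate
w = eg_update(w, g, 0.5)   # mass shifts away from coordinate 0
```

The multiplicative form keeps the iterate strictly inside the simplex, which is what makes the update compatible with the Gibbs-distribution representation of structured labels.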
Abstract:
Background: Strategies for cancer reduction and management are targeted at both individual and area levels. Area-level strategies require careful understanding of geographic differences in cancer incidence, in particular the association with factors such as socioeconomic status, ethnicity and accessibility. This study aimed to identify the complex interplay of area-level factors associated with high area-specific incidence of Australian priority cancers using a classification and regression tree (CART) approach. Methods: Area-specific smoothed standardised incidence ratios were estimated for priority-area cancers across 478 statistical local areas in Queensland, Australia (1998-2007, n=186,075). For those cancers with significant spatial variation, CART models were used to identify whether area-level accessibility, socioeconomic status and ethnicity were associated with high area-specific incidence. Results: The accessibility of a person’s residence had the most consistent association with the risk of cancer diagnosis across the specific cancers. Many cancers were likely to have high incidence in more urban areas, although male lung cancer and cervical cancer tended to have high incidence in more remote areas. The impact of socioeconomic status and ethnicity on these associations differed by type of cancer. Conclusions: These results highlight the complex interactions between accessibility, socioeconomic status and ethnicity in determining cancer incidence risk.
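The core CART step can be sketched in a few lines: choose the split on a feature that minimises the weighted Gini impurity of the two child nodes. The feature and data below are invented, not taken from the study:

```python
# Minimal sketch of one CART split on a single area-level feature.
# 'remoteness' and the high-incidence labels are invented values.

def gini(labels):
    """Gini impurity of a binary label list."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Threshold minimising the weighted impurity of the children."""
    best_t, best_imp = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if imp < best_imp:
            best_t, best_imp = t, imp
    return best_t, best_imp

remoteness = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # hypothetical index
high_inc   = [0,   0,   0,   1,   1,   1]    # 1 = high incidence area
t, imp = best_split(remoteness, high_inc)
```

Recursing on each child yields the tree; CART's appeal for this study is that the resulting splits expose interactions between accessibility, socioeconomic status, and ethnicity directly.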
Abstract:
Facial expression recognition (FER) algorithms mainly focus on classification into a small discrete set of emotions or representation of emotions using facial action units (AUs). Dimensional representation of emotions as continuous values in an arousal-valence space is relatively less investigated. It is not fully known whether fusion of geometric and texture features will result in better dimensional representation of spontaneous emotions. Moreover, the performance of many previously proposed approaches to dimensional representation has not been evaluated thoroughly on publicly available databases. To address these limitations, this paper presents an evaluation framework for dimensional representation of spontaneous facial expressions using texture and geometric features. SIFT, Gabor and LBP features are extracted around facial fiducial points and fused with FAP distance features. The CFS algorithm is adopted for discriminative texture feature selection. Experimental results evaluated on the publicly accessible NVIE database demonstrate that fusion of texture and geometry does not lead to a much better performance than using texture alone, but does result in a significant performance improvement over geometry alone. LBP features perform the best when fused with geometric features. Distributions of arousal and valence for different emotions obtained via the feature extraction process are compared with those obtained from subjective ground truth values assigned by viewers. Predicted valence is found to have a more similar distribution to ground truth than arousal in terms of covariance or Bhattacharyya distance, but it shows a greater distance between the means.
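For reference, the Bhattacharyya distance used above to compare predicted and ground-truth distributions has a simple closed form for univariate Gaussians (the inputs below are illustrative, not the NVIE values):

```python
# Bhattacharyya distance between two univariate Gaussians.
# Example parameters are invented for illustration.
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """D_B = (1/4)ln((1/4)(v1/v2 + v2/v1 + 2)) + (1/4)(mu1-mu2)^2/(v1+v2)."""
    return (0.25 * math.log(0.25 * (var1 / var2 + var2 / var1 + 2))
            + 0.25 * (mu1 - mu2) ** 2 / (var1 + var2))

# identical distributions are at distance zero; any mean or variance
# mismatch increases the distance
d = bhattacharyya_gaussian(0.0, 1.0, 0.4, 1.2)
```

The first term penalises mismatched variances and the second mismatched means, which is why the abstract can separate "similar covariance" from "greater distance between the means".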
Abstract:
Purpose: This study provides a simple method for improving the precision of x-ray computed tomography (CT) scans of irradiated polymer gel dosimeters. The noise affecting CT scans of irradiated gels has been an impediment to the use of clinical CT scanners for gel dosimetry studies. Methods: In this study, it is shown that multiple scans of a single PAGAT gel dosimeter can be used to extrapolate a 'zero-scan' image which displays a similar level of precision to an image obtained by averaging multiple CT images, without the compromised dose measurement resulting from the exposure of the gel to radiation from the CT scanner. Results: When extrapolating the zero-scan image, it is shown that exponential and simple linear fits to the relationship between Hounsfield unit and scan number, for each pixel in the image, provide an accurate indication of gel density. Conclusions: It is expected that this work will be utilised in the analysis of three-dimensional gel volumes irradiated using complex radiotherapy treatments.
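The 'zero-scan' extrapolation can be sketched per pixel as an ordinary least-squares fit of Hounsfield unit against scan number, keeping the intercept as the estimate at scan zero (the data below are invented):

```python
# For one pixel: fit HU vs. scan number by least squares and
# extrapolate back to scan 0. Hypothetical measurements only.

def zero_scan_value(scan_numbers, hounsfield_units):
    """Linear least-squares fit; returns the intercept (scan 0)."""
    n = len(scan_numbers)
    mean_x = sum(scan_numbers) / n
    mean_y = sum(hounsfield_units) / n
    sxx = sum((x - mean_x) ** 2 for x in scan_numbers)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(scan_numbers, hounsfield_units))
    slope = sxy / sxx
    return mean_y - slope * mean_x   # value at scan number 0

scans = [1, 2, 3, 4, 5]
hu = [12.0, 14.0, 16.0, 18.0, 20.0]  # drifts 2 HU per scan
baseline = zero_scan_value(scans, hu)
```

Applying this fit independently to every pixel yields the zero-scan image; the exponential variant mentioned in the abstract replaces the straight line with an exponential model of the same HU-vs-scan relationship.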
Abstract:
Follicle classification is an important aid to the understanding of follicular development and atresia. Some bovine primordial follicles have the classical primordial shape, but ellipsoidal shaped follicles with some cuboidal granulosa cells at the poles are far more common. Preantral follicles have one of two basal lamina phenotypes, either a single aligned layer or one with additional layers. In antral follicles <5 mm diameter, half of the healthy follicles have columnar shaped basal granulosa cells and additional layers of basal lamina, which appear as loops in cross section ('loopy'). The remainder have aligned single-layered follicular basal laminas with rounded basal cells, and contain better quality oocytes than the loopy/columnar follicles. In sizes >5 mm, only aligned/rounded phenotypes are present. Dominant and subordinate follicles can be identified by ultrasound and/or histological examination of pairs of ovaries. Atretic follicles <5 mm are either basal atretic or antral atretic, named on the basis of the location in the membrana granulosa where cells die first. Basal atretic follicles have considerable biological differences to antral atretic follicles. In follicles >5 mm, only antral atresia is observed. The concentrations of follicular fluid steroid hormones can be used to classify atresia and distinguish some of the different types of atresia; however, this method is unlikely to identify follicles early in atresia, and hence misclassify them as healthy. Other biochemical and histological methods can be used, but since cell death is a part of normal homoeostasis, deciding when a follicle has entered atresia remains somewhat subjective.
Abstract:
Background The residue-wise contact order (RWCO) describes the sequence separations between the residues of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure that represents the extent of long-range contacts and is considered as a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable important information to reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. Results We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. 
Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, with the CC rising to 0.57 and an RMSE of 0.79. In addition, combining the predicted secondary structure by PSIPRED was found to significantly improve the prediction performance and could yield the best prediction accuracy, with a CC of 0.60 and RMSE of 0.78, which provided performance at least comparable to other existing methods. Conclusion The SVR method shows a prediction performance competitive with or at least comparable to the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool in extracting the protein sequence-structure relationship and in estimating the protein structural profiles from amino acid sequences.
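The two evaluation metrics reported above, Pearson correlation coefficient (CC) and root mean square error (RMSE), are straightforward to compute from predicted and observed RWCO profiles (the values below are illustrative, not the paper's data):

```python
# Pearson correlation and RMSE between predicted and observed values.
# The example vectors are invented for illustration.
import math

def pearson_cc(pred, obs):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(pred)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)

def rmse(pred, obs):
    """Root mean square error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

pred = [1.0, 2.0, 3.0, 4.0]
obs  = [1.1, 1.9, 3.2, 3.8]
```

CC measures how well the predicted profile tracks the shape of the observed one, while RMSE measures the absolute size of the errors, which is why the paper reports both.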
Abstract:
A new approach to recognition of images using invariant features based on higher-order spectra is presented. Higher-order spectra are translation invariant because translation produces linear phase shifts which cancel. Scale and amplification invariance are satisfied by the phase of the integral of a higher-order spectrum along a radial line in higher-order frequency space, because the contour of integration maps onto itself and both the real and imaginary parts are affected equally by the transformation. Rotation invariance is introduced by deriving invariants from the Radon transform of the image and using the cyclic-shift invariance property of the discrete Fourier transform magnitude. Results on synthetic and actual images show isolated, compact clusters in feature space and high classification accuracies.
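The translation-invariance claim is easy to verify numerically: the bispectrum B(k1,k2) = F(k1)F(k2)F*(k1+k2) of a circularly shifted signal equals that of the original, because the linear phase shifts cancel. A minimal sketch with an invented 1-D signal:

```python
# Bispectrum of a 1-D signal is unchanged by circular translation.
# The signal values below are invented for illustration.
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def bispectrum(x, k1, k2):
    """B(k1, k2) = F(k1) * F(k2) * conj(F(k1 + k2))."""
    F = dft(x)
    n = len(x)
    return F[k1] * F[k2] * F[(k1 + k2) % n].conjugate()

x = [1.0, 3.0, 2.0, 5.0, 4.0, 0.0, 1.0, 2.0]
shifted = x[-2:] + x[:-2]          # circular shift by two samples
b_orig = bispectrum(x, 1, 2)
b_shift = bispectrum(shifted, 1, 2)
```

Shifting by s multiplies F(k) by exp(−2πiks/n); in the product the phases at k1, k2 and k1+k2 cancel exactly, so b_orig and b_shift agree to numerical precision.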
Abstract:
An application of image processing techniques to recognition of hand-drawn circuit diagrams is presented. The scanned image of a diagram is pre-processed to remove noise and converted to bilevel. Morphological operations are applied to obtain a clean, connected representation using thinned lines. The diagram comprises nodes, connections and components. Nodes and components are segmented using appropriate thresholds on a spatially varying object pixel density. Connection paths are traced using a pixel-stack. Nodes are classified using syntactic analysis. Components are classified using a combination of invariant moments, scalar pixel-distribution features, and vector relationships between straight lines in polygonal representations. A node recognition accuracy of 82% and a component recognition accuracy of 86% were achieved on a database comprising 107 nodes and 449 components. This recogniser can be used for layout "beautification" or to generate input code for circuit analysis and simulation packages.
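As an example of the invariant-moment features mentioned, the first Hu moment invariant (normalised μ20 + μ02) is unchanged by translation of the component within the image; the tiny binary "images" below are invented:

```python
# First Hu moment invariant of a small grayscale/binary image.
# The example images are invented; real component images are larger.

def raw_moment(img, p, q):
    """Raw image moment m_pq over a list-of-rows image."""
    return sum(v * (x ** p) * (y ** q)
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def hu1(img):
    """First Hu invariant: (mu20 + mu02) / m00^2."""
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00   # centroid x
    yc = raw_moment(img, 0, 1) / m00   # centroid y
    mu20 = sum(v * (x - xc) ** 2
               for y, row in enumerate(img) for x, v in enumerate(row))
    mu02 = sum(v * (y - yc) ** 2
               for y, row in enumerate(img) for x, v in enumerate(row))
    return (mu20 + mu02) / m00 ** 2

shape = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]
shifted = [[0, 0, 0],
           [0, 1, 1],
           [0, 1, 0]]   # same shape, translated down one row
```

Because the central moments are computed about the centroid, the feature depends only on the component's shape, not on where it sits in the scanned page.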