108 results for Fractal Descriptors


Relevance: 10.00%

Abstract:

Knowledge of the amounts and types of fatty acids in groundnut oil is beneficial, particularly from a nutritional standpoint. Germplasm evaluation data for fatty acid composition on 819 accessions of groundnut (Arachis hypogaea L.) from the Australian Tropical Field Crops Genetic Resource Centre, Biloela, Queensland were examined. Data for eight quantitative fatty acid descriptors have been documented. Statistical assessment, via methods of pattern analysis, summarised and described the patterns of variation in fatty acid composition of the groundnut accessions in the Australian germplasm collection. Presenting the results of principal components analysis and hierarchical cluster analysis in a biplot was shown to be a very useful interpretative tool: such a biplot enables simultaneous examination of the relationships among all the accessions and the fatty acids. Unlike the information available via database searches, the results from contribution analysis together with the biplot provide a global picture of the diversity available for use in plant breeding programs. The use of standardised data for all eight fatty acids, compared with using three specific fatty acids, provided a better description of the total diversity available because it remains relevant under possible changes in nutritional preferences for fatty acids. Fatty acid composition was found to vary in relation to the branching pattern of the accessions. This pattern is generally indicative of the botanical type of groundnut: Virginia (alternate branching) compared with the Spanish and Valencia (sequential branching) types.
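
A minimal sketch of the analysis pipeline described above (standardise, reduce with principal components analysis, then read off the coordinates needed for a biplot), in Python. The data here are a random stand-in for the 819 x 8 fatty acid table, which is not reproduced in the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical stand-in for the accession-by-fatty-acid table (819 x 8)
rng = np.random.default_rng(0)
X = rng.normal(size=(819, 8))

Z = StandardScaler().fit_transform(X)   # standardised data, as in the study
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)               # accession coordinates for the biplot
loadings = pca.components_.T            # fatty-acid vectors for the biplot
# Plotting the scores as points and the loadings as arrows on the same axes
# yields the biplot used to examine accessions and fatty acids simultaneously.
```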

Relevance: 10.00%

Abstract:

Data in germplasm collections contain a mixture of data types: binary, multistate and quantitative. Given the multivariate nature of these data, the pattern analysis methods of classification and ordination have been identified as suitable techniques for statistically evaluating the available diversity. The proximity (or resemblance) measure, which is in part the basis of the complementary nature of classification and ordination techniques, is often specific to particular data types. The use of a combined resemblance matrix has an advantage over data-type-specific proximity measures: it accommodates the different data types without manipulating them to be of a specific type. Descriptors are partitioned into their data types and an appropriate proximity measure is used on each. The separate proximity matrices, after range standardisation, are added as a weighted average, and the combined resemblance matrix is then used for classification and ordination. Germplasm evaluation data for 831 accessions of groundnut (Arachis hypogaea L.) from the Australian Tropical Field Crops Genetic Resource Centre, Biloela, Queensland were examined. Data for four binary, five ordered multistate and seven quantitative descriptors have been documented. The interpretative value of different weightings (equal and unequal weighting of data types to obtain a combined resemblance matrix) was investigated using principal co-ordinate analysis (ordination) and hierarchical cluster analysis. Equal weighting of data types was found to be more valuable for these data, as the results provided greater insight into the patterns of variability available in the Australian groundnut germplasm collection. The complementary nature of pattern analysis techniques enables plant breeders to identify relevant accessions in relation to the descriptors that distinguish amongst them. This additional information may provide plant breeders with a more defined entry point into the germplasm collection for identifying sources of variability for their plant improvement program, thus improving the utilisation of germplasm resources.
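
A rough sketch of the combined resemblance matrix, assuming simple matching distance for the binary block and a range-scaled mean absolute difference (a Gower-style measure) for the multistate and quantitative blocks; the abstract does not specify the exact proximity measures, and ordered multistate descriptors are treated numerically here.

```python
import numpy as np

def range_standardise(d):
    """Scale a distance matrix to [0, 1] by its range."""
    rng = d.max() - d.min()
    return (d - d.min()) / rng if rng > 0 else d

def binary_dist(x):
    """Simple matching distance: proportion of mismatching binary descriptors."""
    return np.mean(x[:, None, :] != x[None, :, :], axis=2)

def numeric_dist(x):
    """Gower-style distance: mean absolute difference after scaling each
    descriptor to [0, 1] by its range."""
    rng = np.ptp(x, axis=0)
    xs = (x - x.min(axis=0)) / np.where(rng > 0, rng, 1)
    return np.mean(np.abs(xs[:, None, :] - xs[None, :, :]), axis=2)

def combined_resemblance(mats, weights):
    """Weighted average of range-standardised per-type distance matrices."""
    return sum(w * range_standardise(d) for d, w in zip(mats, weights)) / sum(weights)

# Hypothetical stand-ins: 831 accessions with 4 binary, 5 ordered multistate
# and 7 quantitative descriptors (the real germplasm data are not reproduced)
rng = np.random.default_rng(0)
b = rng.integers(0, 2, (831, 4))
m = rng.integers(0, 5, (831, 5)).astype(float)
q = rng.normal(size=(831, 7))

# Equal weighting of the three data types, as favoured in the study
D = combined_resemblance([binary_dist(b), numeric_dist(m), numeric_dist(q)],
                         weights=[1.0, 1.0, 1.0])
```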

Relevance: 10.00%

Abstract:

Novel computer vision techniques have been developed to automatically detect unusual events in crowded scenes from video feeds of surveillance cameras. The research is useful in the design of next-generation intelligent video surveillance systems. Two major contributions are the construction of a novel machine learning model for multiple instance learning through compressive sensing, and the design of novel feature descriptors in the compressed video domain.

Relevance: 10.00%

Abstract:

In this paper we propose the hybrid use of illuminant invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows thereby invoked) is a non-negotiable requirement for long-term autonomy using vision. One aspect of this is the ability to reliably classify scene components in the presence of marked and often sudden changes in lighting; this is the focus of this paper. Posed with the task of classifying all parts of a scene from a full colour image, we propose that lighting invariant transforms can reduce the variability of the scene, resulting in a more reliable classification. We leverage the ideas of “data transfer” for classification, beginning with full colour images to obtain candidate scene-level matches using global image descriptors. This is commonly followed by superpixel-level matching with local features. However, we show that if the RGB images are subjected to an illuminant invariant transform before computing the superpixel-level features, classification is significantly more robust to scene illumination effects. The approach is evaluated using three datasets: the first is our own dataset and the second is the KITTI dataset, both with manually generated ground truth for quantitative analysis. We qualitatively evaluate the method on a third custom dataset over a 750 m trajectory.
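
The abstract does not name the transform used, but a common single-channel illuminant invariant image for approximately Planckian illumination is a weighted log-chromaticity combination of the colour channels. A minimal sketch, where the camera constant alpha is purely illustrative:

```python
import numpy as np

def illuminant_invariant(rgb, alpha=0.48):
    """Map an 8-bit RGB image to a single-channel illumination-invariant
    image via a weighted log-chromaticity combination. Assumes narrow-band
    sensor responses and near-Planckian illumination; alpha is a
    camera-dependent constant, and 0.48 is purely illustrative."""
    c = rgb.astype(np.float64) / 255.0 + 1e-6   # avoid log(0)
    r, g, b = c[..., 0], c[..., 1], c[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
```

Superpixel-level features would then be computed on this single channel rather than on the raw RGB image before matching.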

Relevance: 10.00%

Abstract:

Successful prediction of groundwater flow and solute transport through highly heterogeneous aquifers has remained elusive due to the limitations of methods to characterize hydraulic conductivity (K) and generate realistic stochastic fields from such data. As a result, many studies have suggested that the classical advective-dispersive equation (ADE) cannot reproduce such transport behavior. Here we demonstrate that when high-resolution K data are used with a fractal stochastic method that produces K fields with adequate connectivity, the classical ADE can accurately predict solute transport at the macrodispersion experiment (MADE) site in Mississippi. This development provides great promise to accurately predict contaminant plume migration, design more effective remediation schemes, and reduce environmental risks. Key Points:
• Non-Gaussian transport behavior at the MADE site is unraveled
• The ADE can reproduce tracer transport in heterogeneous aquifers with no calibration
• A new fractal method generates heterogeneous K fields with adequate connectivity
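
The abstract does not detail the fractal stochastic method or how connectivity is controlled, but spectral synthesis of a fractional-Brownian-motion-like log-conductivity field conveys the general idea. A hedged sketch (grid size and Hurst exponent are illustrative):

```python
import numpy as np

def fractal_logk_field(n=256, hurst=0.3, seed=0):
    """Generate an n-by-n fractal (fBm-like) log-conductivity field by
    spectral synthesis: random phases filtered by a power-law spectrum."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                         # avoid division by zero at DC
    amp = k ** -(hurst + 1.0)             # 2-D fBm amplitude spectrum
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amp * phase).real
    field = (field - field.mean()) / field.std()
    return field                          # ln K, to be scaled to site statistics

K = np.exp(fractal_logk_field())          # one hydraulic conductivity realization
```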

Relevance: 10.00%

Abstract:

Protein adsorption at solid-liquid interfaces is critical to many applications, including biomaterials, protein microarrays and lab-on-a-chip devices. Despite this general interest, and a large amount of research over the last half-century, protein adsorption cannot be predicted with engineering-level, design-oriented accuracy. Here we describe a Biomolecular Adsorption Database (BAD), freely available online, which archives published protein adsorption data. Piecewise linear regression with breakpoint, applied to the data in the BAD, suggests that the input variables to protein adsorption, i.e., protein concentration in solution; protein descriptors derived from primary structure (number of residues, global protein hydrophobicity and range of amino acid hydrophobicity, isoelectric point); surface descriptors (contact angle); and fluid environment descriptors (pH, ionic strength), correlate well with the output variable: the protein concentration on the surface. Furthermore, neural network analysis revealed that the size of the BAD makes it sufficiently representative, with a neural network-based predictive error of 5% or less. Interestingly, a consistently better fit is obtained if the BAD is divided into two separate sub-sets representing protein adsorption on hydrophilic and hydrophobic surfaces, respectively. Based on these findings, selected entries from the BAD have been used to construct neural network-based estimation routines, which predict the amount of adsorbed protein, the thickness of the adsorbed layer and the surface tension of the protein-covered surface. While the BAD is of general interest, the prediction of the thickness and surface tension of the protein-covered layers is of particular relevance to the design of microfluidic devices.
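
As a purely structural sketch of such a neural network-based estimation routine (the BAD entries, the actual network architecture and the units are not given in the abstract, so the data below are random placeholders for the eight listed input descriptors):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Random placeholders for BAD entries: the eight input descriptors listed in
# the abstract (solution concentration, residue count, global hydrophobicity,
# hydrophobicity range, isoelectric point, contact angle, pH, ionic strength)
# and the output, the adsorbed protein concentration on the surface.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = rng.normal(size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1),
)
model.fit(X, y)
predicted_surface_conc = model.predict(X[:5])
```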

Relevance: 10.00%

Abstract:

This paper presents a new metric, which we call the lighting variance ratio, for quantifying descriptors in terms of their variance under illumination changes. In many applications it is desirable to have descriptors that are robust to changes in illumination, especially in outdoor environments. The lighting variance ratio is useful for comparing descriptors and for determining whether a descriptor is sufficiently lighting invariant for a given environment. The metric is analysed across a number of datasets, cameras and descriptors. The results show that the upright SIFT descriptor is typically the most lighting invariant descriptor.
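
The abstract does not give the formula for the lighting variance ratio, so the following is only one plausible reading of such a metric: compare descriptor distances for the same scene points under two lighting conditions against distances between unrelated descriptor pairs.

```python
import numpy as np

def lighting_variance_ratio(desc_a, desc_b, seed=0):
    """One plausible formulation (the paper's exact definition may differ):
    mean distance between descriptors of the same points under two lighting
    conditions, normalised by the mean distance between unrelated pairs.
    Lower values suggest greater lighting invariance."""
    rng = np.random.default_rng(seed)
    matched = np.linalg.norm(desc_a - desc_b, axis=1).mean()
    shuffled = desc_b[rng.permutation(len(desc_b))]
    unmatched = np.linalg.norm(desc_a - shuffled, axis=1).mean()
    return matched / unmatched
```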

Relevance: 10.00%

Abstract:

This research draws on theories of emergence to inform the creation of an artistic, direct visualization: an interactive artwork and drawing tool for creative participant experiences. Emergence is characteristically creative, and many different models of emergence exist. It is therefore possible to effect creativity through the application of emergence mechanisms from these different disciplines. A review of theories of emergence and of examples of visualization in the arts is provided. An art project led by the author is then discussed in this context. This project, Iterative Intersections, is a collaboration with community artists from the Cerebral Palsy League. It has resulted in a number of creative outcomes, including the interactive art application Of me with me. Analytical discussion of this work shows how its construction draws on aspects of experience design, fractal theory and emergent theory to effect perceptual emergence and creative experience, as well as to facilitate self-efficacy.

Relevance: 10.00%

Abstract:

Background: The use of mobile apps for health and well-being promotion has grown exponentially in recent years, yet there is currently no app-quality assessment tool beyond “star” ratings. Objective: The objective of this study was to develop a reliable, multidimensional measure for trialling, classifying, and rating the quality of mobile health apps. Methods: A literature search was conducted to identify articles containing explicit Web or app quality rating criteria published between January 2000 and January 2013. Existing criteria for the assessment of app quality were categorized by an expert panel to develop the new Mobile App Rating Scale (MARS) subscales, items, descriptors, and anchors. Sixty well-being apps were randomly selected using an iTunes search for MARS rating: ten were used to pilot the rating procedure, and the remaining 50 provided data on interrater reliability. Results: A total of 372 explicit criteria for assessing Web or app quality were extracted from 25 published papers, conference proceedings, and Internet resources. Five broad categories of criteria were identified: four objective quality scales (engagement, functionality, aesthetics, and information quality) and one subjective quality scale. These were refined into the 23-item MARS. The MARS demonstrated excellent internal consistency (alpha = .90) and interrater reliability (intraclass correlation coefficient, ICC = .79). Conclusions: The MARS is a simple, objective, and reliable tool for classifying and assessing the quality of mobile health apps. It can also be used as a checklist for the design and development of new high-quality health apps.
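
The internal consistency figure quoted above is Cronbach's alpha. As a minimal sketch, it can be computed from an apps-by-items matrix of MARS ratings as follows (the study's ratings are not reproduced here):

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_apps, n_items) matrix of scale ratings."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1).sum()
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```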

Relevance: 10.00%

Abstract:

The human connectome has recently become a popular research topic in neuroscience, and many new algorithms have been applied to analyze brain networks. In particular, network topology measures from graph theory have been adapted to analyze network efficiency and 'small-world' properties. While there has been a surge in the number of papers examining connectivity through graph theory, questions remain about its test-retest reliability (TRT). In particular, the reproducibility of structural connectivity measures has not been assessed. We examined the TRT of global connectivity measures generated from graph theory analyses of 17 young adults who underwent two high angular resolution diffusion imaging (HARDI) scans approximately 3 months apart. Of the measures assessed, modularity had the highest TRT, and it was stable across a range of sparsities (a thresholding parameter used to define which network edges are retained). These reliability measures underline the need to develop network descriptors that are robust to acquisition parameters.
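
A rough sketch of a sparsity-thresholded modularity computation, assuming "sparsity" denotes the fraction of strongest edges retained (consistent with the description above). The connectivity matrix is a random stand-in, not HARDI-derived data:

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def modularity_at_sparsity(conn, sparsity):
    """Keep only the strongest `sparsity` fraction of edges of a symmetric
    connectivity matrix, then compute the modularity of the result."""
    upper = np.triu(conn, k=1)                    # each edge counted once
    cut = np.quantile(upper[upper > 0], 1.0 - sparsity)
    g = nx.from_numpy_array(np.where(upper >= cut, upper, 0.0))
    parts = community.greedy_modularity_communities(g, weight="weight")
    return community.modularity(g, parts, weight="weight")

# Random stand-in for a structural connectivity matrix (90 regions)
rng = np.random.default_rng(2)
a = rng.random((90, 90))
conn = (a + a.T) / 2.0
for s in (0.1, 0.2, 0.3):                          # a range of sparsities
    print(s, modularity_at_sparsity(conn, s))
```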

Relevance: 10.00%

Abstract:

In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that l1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.
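
A compact sketch of the local SR descriptor using l1 encoding (via scikit-learn's SparseCoder), average pooling per region and concatenation. The dictionary is random here for illustration, whereas in practice it would be learned from training patches; the SANN and probabilistic GMM encoders are omitted:

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def face_descriptor(face, dictionary, patch=8, regions=(2, 2)):
    """Sketch of the local SR descriptor: l1-encode small normalised patches
    against a dictionary, average-pool the sparse codes within each region,
    then concatenate the pooled vectors into one face descriptor."""
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="lasso_lars", transform_alpha=0.1)
    rh, rw = face.shape[0] // regions[0], face.shape[1] // regions[1]
    pooled = []
    for ri in range(regions[0]):
        for rj in range(regions[1]):
            codes = []
            for i in range(ri * rh, (ri + 1) * rh - patch + 1, patch):
                for j in range(rj * rw, (rj + 1) * rw - patch + 1, patch):
                    p = face[i:i + patch, j:j + patch].ravel()
                    p = (p - p.mean()) / (p.std() + 1e-8)   # normalise patch
                    codes.append(coder.transform(p[None, :])[0])
            pooled.append(np.mean(codes, axis=0))           # average pooling
    return np.concatenate(pooled)

# Illustration with a random dictionary (in practice, learned from training patches)
rng = np.random.default_rng(3)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
descriptor = face_descriptor(rng.random((32, 32)), D)
```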

Relevance: 10.00%

Abstract:

Acoustic recordings play an increasingly important role in monitoring terrestrial environments. However, due to rapid advances in technology, ecologists are accumulating more audio than they can listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings by calculating acoustic indices. These are statistics that describe the temporal-spectral distribution of acoustic energy and reflect content of ecological interest. We combine spectral indices to produce false-color spectrogram images, which not only reveal acoustic content but also facilitate navigation. An additional analytic challenge is to find appropriate descriptors to summarize the content of 24-hour recordings, so that it becomes possible to monitor long-term changes in the acoustic environment at a single location and to compare the acoustic environments of different locations. We describe a 24-hour ‘acoustic fingerprint’ which shows some preliminary promise.
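
An illustrative sketch of index-based false-colour rendering. The three indices below (an ACI-like complexity measure, spectral entropy and background level) are simplified stand-ins; the paper's exact index set may differ:

```python
import numpy as np
from scipy.signal import spectrogram

def false_colour_image(audio, sr, n_chunks):
    """Split a long recording into equal time chunks (e.g. one per minute),
    compute three spectral indices per frequency bin in each chunk, and map
    them to the R, G and B channels of a false-colour spectrogram image."""
    _, _, sxx = spectrogram(audio, fs=sr, nperseg=512)
    cols = []
    for c in np.array_split(sxx, n_chunks, axis=1):
        aci = np.abs(np.diff(c, axis=1)).sum(1) / (c.sum(1) + 1e-12)  # ACI-like
        p = c / (c.sum(1, keepdims=True) + 1e-12)
        ent = -(p * np.log2(p + 1e-12)).sum(1) / np.log2(c.shape[1])  # entropy
        bgn = 10.0 * np.log10(c.mean(1) + 1e-12)                      # background level
        cols.append(np.stack([aci, ent, bgn]))
    img = np.array(cols)                                    # (time, rgb, freq)
    lo = img.min(axis=(0, 2), keepdims=True)
    hi = img.max(axis=(0, 2), keepdims=True)
    return np.transpose((img - lo) / (hi - lo), (2, 0, 1))  # (freq, time, rgb)
```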

Relevance: 10.00%

Abstract:

Background: Optometry students are taught the process of subjective refraction through lectures and laboratory-based practicals before progressing to supervised clinical practice. Simulated learning environments (SLEs) are an emerging technology used in a range of health disciplines; however, there is limited evidence regarding the effectiveness of clinical simulators as an educational tool. Methods: Forty optometry students (20 fourth-year and 20 fifth-year) were assessed twice by a qualified optometrist (two examinations separated by 4-8 weeks) while completing a monocular non-cycloplegic subjective refraction on the same patient, with an unknown refractive error simulated using contact lenses. Half of the students were granted access to an online SLE, the Brien Holden Vision Institute (BHVI®) Virtual Refractor, and the remaining students formed a control group. The primary outcome measures at each visit were accuracy of the clinical refraction compared with that of a qualified optometrist, and accuracy relative to the Optometry Council of Australia and New Zealand (OCANZ) subjective refraction examination criteria. Secondary measures of interest included descriptors of student SLE engagement, student self-reported confidence levels, and correlations between performance in the simulated and real-world clinical environments. Results: Eighty percent of students in the intervention group interacted with the SLE (for an average of 100 minutes); however, there was no correlation between measures of student engagement with the BHVI® Virtual Refractor and the speed or accuracy of clinical subjective refractions. Fifth-year students were typically more confident and refracted more accurately and quickly than fourth-year students. A year-group by experimental-group interaction (p = 0.03) was observed for accuracy of the spherical component of refraction, and post hoc analysis revealed that less experienced students exhibited greater gains in clinical accuracy following exposure to the SLE intervention. Conclusions: Short-term exposure to an SLE can positively influence clinical subjective refraction outcomes for less experienced optometry students and may be of benefit in increasing the skills of novice refractionists to levels appropriate for commencing supervised clinical interactions.

Relevance: 10.00%

Abstract:

Background: The purpose of this presentation is to outline the relevance of categorising load regime data to assess the functional output and usage of the prosthesis of lower-limb amputees. The objectives are:
• to highlight the need for categorisation of activities of daily living,
• to present a categorisation of the load regime applied on the residuum,
• to present some descriptors of the four types of activity that could be detected, and
• to provide an example of the results for a case.
Methods: The load applied on the osseointegrated fixation of one transfemoral amputee was recorded using a portable kinetic system for 5 hours. The load applied on the residuum was divided into four types of activity, corresponding to inactivity, stationary loading, localized locomotion and directional locomotion, as detailed in previous publications. Results: The periods of directional locomotion, localized locomotion, and stationary loading occurred 44%, 34%, and 22% of the recording time and accounted for 51%, 38%, and 12% of the duration of the periods of activity, respectively. The absolute maximum force during directional locomotion, localized locomotion, and stationary loading was 19%, 15%, and 8% of body weight on the anteroposterior axis, 20%, 19%, and 12% on the mediolateral axis, and 121%, 106%, and 99% on the long axis. A total of 2,783 gait cycles were recorded. Discussion: Approximately 10% more gait cycles and 50% more of the total impulse were identified than with conventional analyses. The proposed categorisation and apparatus have the potential to complement conventional instruments, particularly for difficult cases.

Relevance: 10.00%

Abstract:

The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables; examples include generalized linear or additive models with variable selection (Hastie et al., 2002), and classification trees with complexity-based or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations; examples include neural networks, boosted or bagged regression trees and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O’Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine-learning SDMs, or to define priors within Bayesian SDMs.
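
As one hedged illustration of how elicited variable-quality judgements could steer selection (the protocol elicits the judgements; it does not prescribe this particular encoding), prior scores can down-weight candidate features ahead of an L1-penalised SDM fit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical elicited quality scores in (0, 1] for ten candidate variables;
# scaling each feature by its score makes the L1 penalty shrink low-quality
# variables harder, a simple stand-in for informative variable selection.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))                                  # candidate predictors
y = (X[:, 0] - X[:, 1] + rng.normal(size=300) > 0).astype(int)  # presence/absence
quality = np.array([0.9, 0.8, 0.3, 0.3, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1])

sdm = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
sdm.fit(X * quality, y)                      # prior-weighted L1-penalised SDM
retained = np.flatnonzero(sdm.coef_[0])      # indices of selected variables
```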