850 results for Topology-based methods
Abstract:
This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
Abstract:
Bacterially mediated iron redox cycling exerts a strong influence on groundwater geochemistry, but few studies have investigated iron biogeochemical processes in coastal alluvial aquifers from a microbiological viewpoint. The shallow alluvial aquifer located adjacent to the Poona estuary on the subtropical Southeast Queensland coast represents a redox-stratified system where iron biogeochemical cycling potentially affects water quality. Using a 300 m transect of monitoring wells perpendicular to the estuary, we examined groundwater physico-chemical conditions and the occurrence of cultivable bacterial populations involved in iron (and manganese, sulfur) redox reactions in this aquifer. Results showed slightly acidic and near-neutral pH, suboxic conditions and an abundance of dissolved iron consisting primarily of iron(II) in the majority of wells. The highest level of dissolved iron(III) was found in a well proximal to the estuary, most likely as a result of iron curtain effects due to tidal intrusion. A number of cultivable, (an)aerobic bacterial populations capable of diverse carbon, iron, or sulfur metabolism coexisted in groundwater redox transition zones. Our findings indicated that aerobic, heterotrophic respiration and bacterially mediated iron/sulfur redox reactions were integral to carbon cycling in the aquifer. High abundances of dissolved iron and cultivable iron and sulfur bacterial populations in estuary-adjacent aquifers have implications for iron transport to marine waters. This study demonstrated bacterially mediated iron redox cycling and associated biogeochemical processes in subtropical coastal groundwaters using culture-based methods.
Abstract:
Inquiries into return predictability have traditionally been limited to the conditional mean, while the literature on portfolio selection is replete with moment-based analyses considering up to the fourth moment. This paper develops a distribution-based framework for both return prediction and portfolio selection. More specifically, a time-varying return distribution is modeled through quantile regressions and copulas, using quantile regressions to extract information in the marginal distributions and copulas to capture the dependence structure. A preference function which captures higher moments is proposed for portfolio selection. An empirical application highlights the additional information provided by the distributional approach that cannot be captured by traditional moment-based methods.
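The quantile-regression side of such a framework rests on the pinball (quantile) loss, whose minimiser is the target quantile. A minimal numpy sketch (the sample data and grid search are illustrative, not the paper's estimator) demonstrates this property:

```python
import numpy as np

def pinball_loss(y, q, tau):
    # Quantile (pinball) loss: the asymmetric penalty minimised
    # by quantile regression at level tau
    diff = y - q
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

rng = np.random.default_rng(42)
returns = rng.normal(size=20_000)  # toy stand-in for asset returns

# Grid-search the constant that minimises the pinball loss at each tau;
# the minimiser coincides with the empirical tau-quantile.
grid = np.linspace(-3.0, 3.0, 801)
estimates = {}
for tau in (0.1, 0.5, 0.9):
    losses = np.array([pinball_loss(returns, g, tau) for g in grid])
    estimates[tau] = float(grid[np.argmin(losses)])
```

In the full framework, quantiles estimated this way at many levels would form the marginal distributions that a copula then joins into a dependence structure.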
Abstract:
The benefits of applying tree-based methods to the purpose of modelling financial assets, as opposed to linear factor analysis, are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data, which is noisy and often contains non-linear relationships and high-order interactions. CART was originally developed in the 1980s by medical researchers disheartened by the stringent assumptions applied by traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
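The core step CART applies recursively is an exhaustive search for the split that most reduces Gini impurity. A bare-bones sketch of that one step (a generic illustration, not the cited authors' implementation):

```python
import numpy as np

def gini(y):
    # Gini impurity of a label array: 1 - sum of squared class proportions
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def best_split(X, y):
    # Exhaustive search over (feature, threshold) pairs for the split
    # minimising the weighted Gini impurity of the two children
    best = (None, None, gini(y))
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (j, float(t), score)
    return best

# Toy data: only feature 0 separates the classes
X = np.array([[1.0, 5.0], [2.0, 6.0], [8.0, 5.0], [9.0, 6.0]])
y = np.array([0, 0, 1, 1])
feat, thr, score = best_split(X, y)
```

A full tree simply recurses on the two child subsets until a purity or size criterion is met, which is what makes the method robust to the non-linearities and interactions noted above.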
Abstract:
The performance of techniques for evaluating multivariate volatility forecasts is not yet as well understood as that of their univariate counterparts. This paper aims to evaluate the efficacy of a range of traditional statistical methods for multivariate forecast evaluation together with methods based on underlying considerations of economic theory. It is found that a statistical method based on likelihood theory and an economic loss function based on portfolio variance are the most effective means of identifying optimal forecasts of conditional covariance matrices.
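The portfolio-variance loss idea can be sketched as follows: build minimum-variance weights from each forecast covariance matrix and score the forecast by the realised variance of that portfolio (an illustrative two-asset toy, not the paper's exact criterion):

```python
import numpy as np

def min_var_weights(sigma_forecast):
    # Global minimum-variance weights implied by a forecast covariance
    inv = np.linalg.inv(sigma_forecast)
    ones = np.ones(sigma_forecast.shape[0])
    w = inv @ ones
    return w / (ones @ w)

def portfolio_variance_loss(sigma_forecast, sigma_true):
    # Realised variance of the portfolio built from the forecast;
    # lower is better, and the true covariance attains the minimum
    w = min_var_weights(sigma_forecast)
    return float(w @ sigma_true @ w)

sigma_true = np.array([[0.04, 0.01],
                       [0.01, 0.09]])
good_forecast = sigma_true                    # a perfect forecast
bad_forecast = np.array([[0.09, -0.02],
                         [-0.02, 0.04]])      # badly mis-specified
```

Because the true covariance minimises the realised portfolio variance over all fully invested weight vectors, this loss ranks the perfect forecast ahead of the mis-specified one.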
Abstract:
Bananas are one of the world's most important crops, serving as a staple food and an important source of income for millions of people in the subtropics. Pests and diseases are a major constraint to banana production. To prevent the spread of pests and disease, farmers are encouraged to use disease- and insect-free planting material obtained by micropropagation. This option, however, does not always exclude viruses, and concern remains over the quality of planting material. Therefore, there is a demand for effective and reliable virus indexing procedures for tissue culture (TC) material. Reliable diagnostic tests are currently available for all of the economically important viruses of bananas with the exception of Banana streak viruses (BSV, Caulimoviridae, Badnavirus). Development of a reliable diagnostic test for BSV is complicated by the significant serological and genetic variation reported for BSV isolates, and the presence of endogenous BSV (eBSV). Current PCR- and serological-based diagnostic methods for BSV may not detect all species of BSV, and PCR-based methods may give false positives because of the presence of eBSV. Rolling circle amplification (RCA) has been reported as a technique to detect BSV which can also discriminate between episomal and endogenous BSV sequences. However, the method is too expensive for large-scale screening of samples in developing countries, and little information is available regarding its sensitivity. Therefore, the development of reliable PCR-based assays is still considered the most appropriate option for large-scale screening of banana plants for BSV.
This MSc project aimed to refine and optimise the protocols for BSV detection, with a particular focus on developing reliable PCR-based diagnostics. Initially, the appropriateness and reliability of PCR and RCA as diagnostic tests for BSV detection were assessed by testing 45 field samples of banana collected from nine districts in the Eastern region of Uganda in February 2010. This research also aimed to investigate the diversity of BSV in eastern Uganda, identifying the BSV species present and characterising any new BSV species. Of the 45 samples tested, 38 and 40 samples were considered positive by PCR and RCA, respectively. Six different species of BSV, namely Banana streak IM virus (BSIMV), Banana streak MY virus (BSMYV), Banana streak OL virus (BSOLV), Banana streak UA virus (BSUAV), Banana streak UL virus (BSULV) and Banana streak UM virus (BSUMV), were detected by PCR and confirmed by RCA and sequencing. No new species were detected, but this was the first report of BSMYV in Uganda. Although RCA was demonstrated to be suitable for broad-range detection of BSV, it proved time-consuming and laborious for identification in field samples. Due to the disadvantages associated with RCA, attempts were made to develop a reliable PCR-based assay for the specific detection of episomal BSOLV, Banana streak GF virus (BSGFV), BSMYV and BSIMV. For BSOLV and BSGFV, the integrated sequences exist in rearranged, repeated and partially inverted portions at their site of integration. Therefore, for these two viruses, primer sets were designed by mapping previously published sequences of their endogenous counterparts onto published sequences of the episomal genomes. For BSOLV, two primer sets were designed, while for BSGFV, a single primer set was designed.
The episomal specificity of these primer sets was assessed by testing 106 plant samples collected during surveys in Kenya and Uganda, and 33 leaf samples from a wide range of banana cultivars maintained in TC at the Maroochy Research Station of the Department of Employment, Economic Development and Innovation (DEEDI), Queensland. All of these samples had previously been tested for episomal BSV by RCA and for both BSOLV and BSGFV by PCR using published primer sets. The outcome of these analyses was that the newly designed primer sets for BSOLV and BSGFV were able to distinguish between episomal BSV and eBSV in most cultivars with some B-genome component. In some samples, however, amplification was observed using the putative episomal-specific primer sets where episomal BSV was not identified using RCA. This may reflect a difference in the sensitivity of PCR compared to RCA, or possibly the presence of an eBSV sequence of different conformation. Since the sequences of the respective eBSVs for BSMYV and BSIMV in the M. balbisiana genome are not available, a series of random primer combinations were tested in an attempt to find potential episomal-specific primer sets for BSMYV and BSIMV. Of an initial 20 primer combinations screened for BSMYV detection on a small number of control samples, 11 primer sets appeared to be episomal-specific. However, subsequent testing of two of these primer combinations on a larger number of control samples produced some inconsistent results which will require further investigation. Testing of the 25 primer combinations for episomal-specific detection of BSIMV on a number of control samples showed that none were able to discriminate between episomal and endogenous BSIMV. The final component of this research project was the development of an infectious clone of a BSV endemic in Australia, namely BSMYV. This was considered important to enable the generation of large amounts of diseased plant material needed for further research.
A terminally redundant fragment (~1.3 × the BSMYV genome) was cloned and transformed into Agrobacterium tumefaciens strain AGL1, and used to inoculate 12 healthy banana plants of the cultivar Cavendish (Williams) by three different methods. At 12 weeks post-inoculation, (i) four of the five banana plants inoculated by corm injection showed characteristic BSV symptoms while the remaining plant was wilting/dying, (ii) three of the five banana plants inoculated by needle-pricking of the stem showed BSV symptoms, one plant was symptomless and the remaining plant had died, and (iii) both banana plants inoculated by leaf infiltration were symptomless. When banana leaf samples were tested for BSMYV by PCR and RCA, BSMYV was confirmed in all banana plants showing symptoms, including those that were wilting and/or dying. The results from this research have provided several avenues for further research. By completely sequencing all variants of eBSOLV and eBSGFV and fully sequencing the eBSIMV and eBSMYV regions, episomal BSV-specific primer sets for all eBSVs could potentially be designed that avoid all integrants of a particular BSV species. Furthermore, the development of an infectious BSV clone will enable large numbers of BSV-infected plants to be generated for further testing of the sensitivity of RCA compared to other, more established assays such as PCR. The development of infectious clones also opens the possibility of virus-induced gene silencing studies in banana.
Abstract:
Retrieving information from Twitter is always challenging due to its large volume, inconsistent writing and noise. Most existing information retrieval (IR) and text mining methods focus on a term-based approach, but suffer from problems of term variation such as polysemy and synonymy. This problem is exacerbated when such methods are applied to Twitter due to the length limit. Over the years, people have held the hypothesis that pattern-based methods should perform better than term-based methods as they provide more context, but limited studies have been conducted to support such a hypothesis, especially on Twitter. This paper presents an innovative framework to address the issue of performing IR in microblogs. The proposed framework discovers patterns in tweets as higher-level features and assigns weights to low-level features (i.e., terms) based on their distributions in the higher-level features. We present experimental results on the TREC11 microblog dataset showing that our proposed approach significantly outperforms the term-based methods Okapi BM25 and TF-IDF, as well as pattern-based methods, in terms of precision, recall and F-measure.
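One of the term-based baselines named above, Okapi BM25, is compact enough to sketch directly (a toy implementation with the common k1 = 1.5, b = 0.75 defaults; the corpus and query are invented):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Score each tokenised document against the query with Okapi BM25
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()
    for d in docs:
        df.update(set(d))            # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs chase cats in the park".split(),
    "quantum computing is fun".split(),
]
scores = bm25_scores(["cat", "mat"], docs)
```

Note how purely lexical matching misses "cats" in the second document, the term-variation weakness the pattern-based framework above targets.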
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine if two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
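The patch-encode-pool-concatenate pipeline can be sketched roughly as below; a random dictionary with soft-threshold encoding stands in for the paper's learned dictionaries and l1 solver, and all sizes are illustrative:

```python
import numpy as np

def extract_patches(img, patch=8, step=4):
    # Densely sample square patches from a 2-D image, flattened to vectors
    h, w = img.shape
    return np.array([img[r:r + patch, c:c + patch].ravel()
                     for r in range(0, h - patch + 1, step)
                     for c in range(0, w - patch + 1, step)])

def face_descriptor(img, dictionary, regions=4, thresh=0.5):
    # Encode each patch against a dictionary (crude soft-threshold
    # "sparse" code), average-pool the codes within spatial regions,
    # then concatenate the region descriptors
    patches = extract_patches(img)
    patches = patches - patches.mean(axis=1, keepdims=True)  # remove DC
    codes = patches @ dictionary.T                           # atom responses
    codes = np.sign(codes) * np.maximum(np.abs(codes) - thresh, 0.0)
    parts = np.array_split(codes, regions)                   # crude regions
    return np.concatenate([p.mean(axis=0) for p in parts])   # pool + concat

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(32, 64))     # 32 atoms for 8x8 patches
img = rng.normal(size=(64, 64))            # stand-in face image
desc = face_descriptor(img, dictionary)    # 4 regions x 32 atoms = 128-D
```

The average pooling is the step that deliberately discards within-region spatial relations, which is what buys robustness to misalignment.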
Abstract:
Environmental Burkholderia pseudomallei isolated from sandy soil at Castle Hill, Townsville, in the dry tropic region of Queensland, Australia, was inoculated into sterile-soil laboratory microcosms subjected to variable soil moisture. Survival and sublethal injury of the B. pseudomallei strain were monitored by recovery using culture-based methods. Soil extraction buffer yielded higher recoveries as an extraction agent than sterile distilled water. B. pseudomallei was not recoverable when inoculated into desiccated soil but remained recoverable from moist soil subjected to 91 days of desiccation and showed a growth response to increased soil moisture over at least 113 days. Results indicate that endemic dry tropic soil may act as a reservoir during the dry season, with an increase in cell number and potential for mobilization from soil into water in the wet season.
Abstract:
Speaker diarization is the process of annotating an input audio with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of 'who spoke when'. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meetings audio.
Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed, to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
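The single-threshold style of boundary detection that the thesis improves on can be illustrated with the classic ΔBIC criterion for a Gaussian change point (a 1-D toy, not the multivariate Bayes-factor formulation proposed in the thesis):

```python
import numpy as np

def delta_bic(x, t, lam=1.0):
    # Delta-BIC for hypothesising a change point at index t in a 1-D
    # sequence: two Gaussians versus one. Positive values favour a change.
    n = len(x)
    var0 = np.var(x)
    var1, var2 = np.var(x[:t]), np.var(x[t:])
    ll_gain = 0.5 * (n * np.log(var0)
                     - t * np.log(var1)
                     - (n - t) * np.log(var2))
    penalty = lam * 0.5 * 2 * np.log(n)   # 2 extra parameters (mean, var)
    return ll_gain - penalty

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300),   # "speaker A"
                    rng.normal(3.0, 1.0, 300)])  # "speaker B"
candidates = range(50, len(x) - 50)
best_t = max(candidates, key=lambda t: delta_bic(x, t))
```

Decisions hinge on thresholding ΔBIC at zero with a tuned penalty weight, which is exactly where limited segment data makes the plug-in model estimates unreliable and motivates the Bayes-factor treatment above.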
Abstract:
Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. The Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill-set from the annotator and also the availability of domain ontologies that include the concepts related to the topics of the service. These issues have prevented these mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, a comprehensive analysis of the impact of using Wikipedia as a source of semantics in web service discovery does not exist.
The main output of this research is a web service discovery engine that implements these methods and a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
Abstract:
A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. The damage detection by these methods is in fact a result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures, such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process will consume massive computational resources and converge slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence process in a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups with a smaller number of damage candidates; the converged population in each group then evolves as an initial population of the next layer, where the groups merge into larger groups. In a damage detection process featuring ML-GA, parallel computation can be implemented, enhancing optimization performance and computational efficiency. To assess the proposed algorithm, the modal strain energy correlation (MSEC) has been adopted as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model have been employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA converges only to a local solution.
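The conventional-GA baseline that ML-GA layers over can be sketched in a few lines (a toy quadratic objective and invented parameters stand in for the bridge model and the MSEC objective):

```python
import numpy as np

def run_ga(objective, n_vars, pop_size=60, n_gen=150, mut_rate=0.2, seed=0):
    # Minimal real-coded GA: elitism, tournament selection, uniform
    # crossover and Gaussian mutation. Minimises `objective`.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_vars))
    for _ in range(n_gen):
        fit = np.array([objective(ind) for ind in pop])
        new_pop = [pop[int(np.argmin(fit))]]           # keep the elite
        while len(new_pop) < pop_size:
            i, j = rng.integers(0, pop_size, 2)
            a = pop[i] if fit[i] < fit[j] else pop[j]  # tournament pick 1
            i, j = rng.integers(0, pop_size, 2)
            b = pop[i] if fit[i] < fit[j] else pop[j]  # tournament pick 2
            mask = rng.random(n_vars) < 0.5            # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(n_vars) < mut_rate     # Gaussian mutation
            child = np.clip(child + mutate * rng.normal(0.0, 0.1, n_vars),
                            0.0, 1.0)
            new_pop.append(child)
        pop = np.array(new_pop)
    fit = np.array([objective(ind) for ind in pop])
    return pop[int(np.argmin(fit))]

# Toy "damage identification": recover a known damage-severity vector
true_damage = np.array([0.0, 0.3, 0.0, 0.7, 0.0])
best = run_ga(lambda d: float(np.sum((d - true_damage) ** 2)), n_vars=5)
```

ML-GA's layering amounts to running such populations in smaller candidate groups and merging the converged groups into the next layer's initial population, which is what permits the parallel computation noted above.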
Abstract:
This thesis investigated the viability of using Frequency Response Functions in combination with the Artificial Neural Network technique for damage assessment of building structures. The proposed approach can help overcome some of the limitations associated with previously developed vibration-based methods and assist in delivering more accurate and robust damage identification results. Excellent results were obtained for damage identification in the case studies, demonstrating that the proposed approach was developed successfully.
Abstract:
In this paper, we explore the effectiveness of patch-based gradient feature extraction methods when applied to appearance-based gait recognition. Extending existing popular feature extraction methods such as HOG and LDP, we propose a novel technique which we term the Histogram of Weighted Local Directions (HWLD). These 3 methods are applied to gait recognition using the GEI feature, with classification performed using SRC. Evaluations on the CASIA and OULP datasets show significant improvements using these patch-based methods over existing implementations, with the proposed method achieving the highest recognition rate for the respective datasets. In addition, the HWLD can easily be extended to 3D, which we demonstrate using the GEV feature on the DGD dataset, observing improvements in performance.
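The gradient-orientation histogram underlying HOG-style descriptors can be sketched for a single patch as below (a plain magnitude-weighted histogram; the weighting and local-direction coding specific to HWLD are not reproduced):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    # Magnitude-weighted histogram of gradient orientations for one
    # patch, the building block of HOG-style descriptors
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientations
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(),
                       minlength=n_bins)
    return hist / (hist.sum() + 1e-12)        # L1 normalisation

# A vertical edge produces purely horizontal gradients,
# so all the weight should land in the orientation-0 bin
patch = np.zeros((16, 16))
patch[:, 8:] = 1.0
hist = orientation_histogram(patch)
```

A full descriptor tiles the silhouette image (e.g. a GEI) into patches, computes one such histogram per patch, and concatenates them.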
Abstract:
There are several methods for determining the proteoglycan content of cartilage in biomechanics experiments. Many of these are assay-based methods, such as the histochemistry or spectrophotometry protocols, in which quantification is determined biochemically. More recently, a method has emerged that quantifies proteoglycan content using image processing algorithms (e.g., in ImageJ) applied to histological micrographs, with advantages including time savings and low cost. However, it is unknown whether this image analysis method produces results comparable to those obtained from the biochemical methodology. This paper compares the results of a well-established chemical method with those obtained using image analysis to determine the proteoglycan content of visually normal cartilage samples (n=33) and their progressively degraded counterparts. The results reveal a strong linear relationship with a coefficient of determination (R²) of 0.9928, leading to the conclusion that the image analysis methodology is a viable alternative to spectrophotometry.
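The comparison reduces to fitting a least-squares line between the two measurement series and checking the coefficient of determination; a numpy sketch with invented data standing in for the two readouts:

```python
import numpy as np

def r_squared(x, y):
    # Coefficient of determination of the least-squares line y ~ a*x + b
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# Toy stand-ins for the chemical assay and the image-analysis readout
rng = np.random.default_rng(7)
chemical = np.linspace(10.0, 80.0, 33)                  # n = 33 samples
image_based = 0.95 * chemical + 2.0 + rng.normal(0.0, 1.0, 33)
r2 = r_squared(chemical, image_based)
```

An R² near 1 across the degradation series is what licenses treating the cheaper image-based readout as a proxy for the biochemical one.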