902 results for Graph-based methods


Relevance: 80.00%

Abstract:

Enabling natural human-robot interaction using computer-vision-based applications requires fast and accurate hand detection. However, previous works in this field impose various constraints, such as limiting the number of detected gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach that integrates temporal coherence cues and hand detection based on wrists using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism, requiring no user participation, for segmenting hands independently of their gesture; (2) a larger number of detected gestures and a faster training phase than previous cascade-classifier-based methods; and (3) near real-time performance for hand pose detection in video streams.
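
To make the detection pipeline concrete, the sketch below combines a cascade classifier with a simple temporal-coherence filter, in the spirit of the approach described above. It uses OpenCV's CascadeClassifier API; "wrist_cascade.xml" is a hypothetical placeholder (the paper's trained wrist classifier is not specified here), and the discard-large-jumps rule is a simplified assumption rather than the authors' exact mechanism.

```python
import cv2

# A sketch of cascade-based detection with a simple temporal-coherence filter.
# "wrist_cascade.xml" is a hypothetical placeholder for a trained cascade.
cascade = cv2.CascadeClassifier("wrist_cascade.xml")

def detect_with_coherence(frames, max_jump=40):
    """Keep, per frame, one detection consistent with the previous frame."""
    prev_center, tracked = None, []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        best = None
        for (x, y, w, h) in boxes:
            cx, cy = x + w / 2, y + h / 2
            # temporal coherence: reject detections far from the last position
            if (prev_center is None or
                    abs(cx - prev_center[0]) + abs(cy - prev_center[1]) < max_jump):
                best, prev_center = (x, y, w, h), (cx, cy)
                break
        tracked.append(best)
    return tracked
```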

Relevance: 80.00%

Abstract:

BACKGROUND: Post-abortion contraceptive use in India is low and the use of modern methods of contraception is rare, especially in rural areas. This study primarily compares contraceptive use among women whose abortion outcome was assessed in-clinic with women who assessed their abortion outcome at home, in a low-resource, primary health care setting. Moreover, it investigates how background characteristics and abortion service provision influence contraceptive use post-abortion. METHODS: A randomized controlled, non-inferiority trial (RCT) compared clinic follow-up with home-assessment of abortion outcome at 2 weeks post-abortion. Additionally, contraceptive use at 3 months post-abortion was investigated through a cross-sectional follow-up interview with a largely urban sub-sample of women from the RCT. Women seeking abortion with a gestational age of up to 9 weeks and who agreed to a 2-week follow-up were included (n = 731). Women with known contraindications to medical abortion, Hb < 85 g/l, or aged below 18 years were excluded. Data were collected between April 2013 and August 2014 in six primary health-care clinics in Rajasthan. A computerised random number generator created the randomisation sequence (1:1) in blocks of six. Contraceptive use was measured at 2 weeks among women successfully followed up (n = 623), and at 3 months in the sub-set of women who were recruited at one of the urban study sites, owned a phone and agreed to a 3-month follow-up (n = 114). RESULTS: There were no differences in contraceptive use or continuation between the study groups at 3 months (76% clinic follow-up, 77% home-assessment); however, women in the clinic follow-up group were more likely to adopt a contraceptive method at 2 weeks (62 ± 12%), while women in the home-assessment group were more likely to adopt a method after their next menstruation (60 ± 13%). Fifty-two per cent of women who initiated a method at 2 weeks chose the 3-month injection or the copper intrauterine device. Only 4% of women preferred sterilization. Caste, educational attainment and type of residence did not influence contraceptive use. CONCLUSIONS: Simplified follow-up after early medical abortion will not change women's opportunities to access contraception in a low-resource setting, provided contraceptive services are offered as intra-abortion services as early as day one. Women's post-abortion contraceptive use at 3 months is unlikely to be affected by the mode of follow-up after medical abortion, even in a low-resource setting. Clinical guidelines need to encourage intra-abortion contraception, offering the full spectrum of evidence-based methods, especially long-acting reversible methods. TRIAL REGISTRATION: Clinicaltrials.gov NCT01827995.

Relevance: 80.00%

Abstract:

In recent years, vibration-based structural damage identification has been the subject of significant research in structural engineering. The basic idea of vibration-based methods is that damage induces changes in mechanical properties that cause anomalies in the dynamic response of the structure; measuring these anomalies allows damage and its extent to be localized. Measured vibration data, such as frequencies and mode shapes, can be used in Finite Element Model Updating to adjust structural parameters sensitive to damage (e.g. Young's modulus). The novel aspect of this thesis is the introduction into the objective function of accurate measurements of strain mode shapes, evaluated through FBG sensors. After a review of the relevant literature, the case study, an irregular prestressed concrete beam intended for the roofing of industrial structures, is presented. The mathematical model was built through FE models, studying the static and dynamic behaviour of the element. Another analytical model, based on the Ritz method, was developed to investigate the possible interaction between the RC beam and the steel supporting table used for testing. Experimental data, recorded through the simultaneous use of different measurement techniques (optical fibres, accelerometers, LVDTs), were compared with theoretical data, allowing the best model to be identified; for this model, the settings for the updating procedure were outlined.
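
As a concrete illustration of the updating step, the sketch below adjusts Young's modulus so that predicted natural frequencies match measured ones, using SciPy's least_squares. The sqrt(E) frequency scaling is a toy surrogate for the FE solver and all numbers are illustrative assumptions, not the thesis's model; a full implementation would also add the FBG strain-mode-shape terms to the residual.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative reference values (not from the thesis).
E_ref = 35e9                              # reference Young's modulus [Pa]
f_ref = np.array([4.2, 16.8, 37.9])       # model frequencies at E_ref [Hz]
f_meas = np.array([4.0, 16.1, 36.5])      # "measured" frequencies [Hz]

def predicted_frequencies(E):
    # Toy FE surrogate: beam natural frequencies scale with sqrt(stiffness).
    return f_ref * np.sqrt(E / E_ref)

def residual(theta):
    # Relative frequency residuals; strain-mode-shape terms would be appended.
    return (predicted_frequencies(theta[0]) - f_meas) / f_meas

result = least_squares(residual, x0=[E_ref], bounds=(1e9, 100e9))
print(f"updated E: {result.x[0] / 1e9:.1f} GPa")
```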

Relevance: 80.00%

Abstract:

Objectives: Mycological contamination of occupational environments can result from the dispersion of fungal spores in the air and on surfaces; it is therefore important to assess both types of samples. In the present study we assessed fungal contamination in air and surface samples to show the relevance of surface sampling in complementing the results obtained from air samples. Material and Methods: In total, 42 settings were assessed by analysing air and surface samples. The settings were divided into settings with a high fungal load (7 poultry farms, 7 pig farms, 3 cork industries, 3 waste management plants, 2 wastewater treatment plants and 1 horse stable) and settings with a low fungal load (10 hospital canteens, 8 college canteens and 1 maternity hospital). In addition to culture-based methods, molecular tools were applied to detect the fungal burden in the settings with a higher fungal load. Results: Of the 218 sampling sites, 140 (64.2%) presented species on the examined surfaces that differed from the species identified in the air. In the high-fungal-load settings, a positive association was found between the presence of different species in the air and on surfaces. Wastewater treatment plants were the setting with the highest number of species differing between air and surfaces. Conclusions: We observed that surface sampling and the application of molecular tools showed the same efficacy in species detection in high-fungal-load settings, corroborating the fact that surface sampling is crucial for a correct and complete analysis of occupational scenarios.

Relevance: 80.00%

Abstract:

Perturbation of natural ecosystems, namely through increasing freshwater use and its degradative use, as well as through topsoil erosion by water in land-use production systems, has emerged as a topic of high environmental concern. Freshwater use has become a focus of attention in recent years for all stakeholders involved in the production of goods, mainly agro-industrial and forest-based products, which are freshwater-intensive, requiring large inputs of green and blue water. This thesis presents a global review of the available Water Footprint Assessment and Life Cycle Assessment (LCA)-based methods for measuring and assessing the environmental relevance of freshwater resource use from a life cycle perspective. Using some of the available midpoint LCA-based methods, the freshwater-use-related impacts of a Portuguese wine (white 'vinho verde') were assessed. However, the relevance of environmental green water has been neglected because of the absence of a comprehensive impact assessment method for green water flows. To overcome this constraint, this thesis improves and enhances LCA-based methods by providing a midpoint, spatially explicit Life Cycle Impact Assessment (LCIA) method for assessing impacts on the terrestrial green water flow and for addressing reductions in surface blue water production caused by reductions in surface runoff due to land-use production systems. The applicability of the proposed method is illustrated by a case study on Eucalyptus globulus conducted in Portugal, as the growth of short-rotation forestry is largely dependent on local precipitation. Topsoil erosion by water has been characterised as one of the most pressing problems for rivers. For this reason, the thesis also focuses on the ecosystem impacts caused by suspended solids (SS) from topsoil erosion that reach freshwater systems. A framework to model spatially distributed SS delivery to freshwater streams, and a fate-and-effect LCIA method to derive site-specific characterisation factors (CFs) for endpoint damage to aquatic ecosystem diversity, namely to algae, macrophytes and macroinvertebrates, were developed. The applicability of this framework, combined with the derived site-specific CFs, is shown through a case study on E. globulus stands located in Portugal, as an example of a land-use-based system. A spatially explicit LCA was shown to be necessary, since the impacts associated with both green water flows and SS vary greatly as a function of spatial location.
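
As a minimal illustration of the aggregation step shared by LCIA methods such as the one proposed here, the sketch below multiplies spatially resolved inventory flows by site-specific characterisation factors and sums them. Site names and numbers are illustrative assumptions, not values derived in the thesis.

```python
# Spatially explicit LCIA aggregation: impact = sum over sites of flow * CF.
flows = {"site_A": 120.0, "site_B": 45.0}   # e.g. t of SS delivered per year
cfs = {"site_A": 0.8, "site_B": 2.3}        # site-specific CFs [damage/unit flow]

impact = sum(flows[s] * cfs[s] for s in flows)
print(f"aggregated impact score: {impact:.1f}")
```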

Relevance: 80.00%

Abstract:

With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and the location where a blog article is written and the author(s) of a biomedical publication, or implicit, such as the positive or negative sentiment an author had when writing a product review; there may also be complex context, such as the social network of the authors. Many applications require analysis of topic patterns over different contexts. For instance, analysis of search logs in the context of the user can reveal how to improve the quality of a search engine by optimizing search results for particular users; analysis of customer reviews in the context of positive and negative sentiments can help users summarize public opinions about a product; and analysis of blogs or scientific publications in the context of a social network can facilitate the discovery of more meaningful topical communities. Since context information significantly affects the choices of topics and language made by authors, it is very important to incorporate it when analyzing and mining text data. Modeling the context in text and discovering contextual patterns of language units and topics from text, a general task we refer to as Contextual Text Mining, has widespread applications in text mining.

In this thesis, we provide a novel and systematic study of contextual text mining, a new paradigm of text mining that treats context information as the "first-class citizen." We formally define the problem of contextual text mining and its basic tasks, and propose a general framework for contextual text mining based on generative modeling of text. This conceptual framework provides general guidance on text mining problems with context information and can be instantiated into many real tasks, including the general problem of contextual topic analysis. We formally present a functional framework for contextual topic analysis, with a general contextual topic model and its various versions, which can effectively solve text mining problems in many real-world applications. We further introduce general components of contextual topic analysis: adding priors to contextual topic models to incorporate prior knowledge, regularizing contextual topic models with the dependency structure of context, and postprocessing contextual patterns to extract refined patterns. These refinements of the general contextual topic model naturally lead to a variety of probabilistic models which incorporate different types of context and various assumptions and constraints. These special versions of the contextual topic model prove effective in a variety of real applications involving topics and explicit, implicit, and complex contexts.

We then introduce a postprocessing procedure for contextual patterns that generates meaningful labels for multinomial context models, providing a general way to interpret text mining results for real users. By applying contextual text mining in the "context" of other text information management tasks, including ad hoc text retrieval and web search, we further demonstrate the effectiveness of contextual text mining techniques quantitatively with large-scale datasets. The framework of contextual text mining not only unifies many explorations of text analysis with context information, but also opens up many new possibilities for future research directions in text mining.
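
As a simplified illustration of contextual topic analysis, the sketch below fits a standard topic model and then compares average topic mixtures across an explicit context (here, the year a document was written). The thesis's contextual topic models condition the generative process on context directly; this partition-and-compare version, with toy documents and labels, only approximates that idea.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents with an explicit context label (year written).
docs = ["search engine ranking results", "query log click model ranking",
        "opinion sentiment product review", "product review rating sentiment"]
contexts = ["2006", "2006", "2007", "2007"]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)                      # per-document topic mixtures

for c in sorted(set(contexts)):
    mask = np.array([ctx == c for ctx in contexts])
    print(c, theta[mask].mean(axis=0))        # topic coverage in this context
```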

Relevance: 80.00%

Abstract:

Relationships between organisms within an ecosystem are one of the main focuses in the study of ecology and evolution. For instance, host-parasite interactions have long been of close interest to ecology, evolutionary biology and conservation science, due to the great variety of strategies and interaction outcomes. Monogenean ecto-parasites make up a significant portion of flatworms. Gyrodactylus salaris is a monogenean freshwater ecto-parasite of Atlantic salmon (Salmo salar) whose damage can leave fish prone to further bacterial and fungal infections. G. salaris is the only such parasite whose genome has been studied so far. The RNA-seq data analyzed in this thesis had already been annotated using LAST. The RNA-seq data were obtained from Illumina sequencing; the resulting reads were assembled into 15,777 transcripts. LAST annotated 46% of the transcripts, leaving the remainder unknown. This thesis work started from the whole dataset, and the annotation process was continued using PANNZER, CDD and InterProScan. This resulted in 56% of sequences successfully annotated, with parasite-specific proteins identified. This thesis represents the first monogenean transcriptomic resource, providing an important basis for further research on this species. Additionally, the comparison of annotation methods interestingly revealed that description- and domain-based methods perform better than simple similarity-search methods, suggesting the use of these tools and databases for functional annotation. These results also emphasize the need to use multiple methods and databases, and highlight the need for more genomic information related to G. salaris.
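
The benefit of combining annotation strategies can be made concrete with a small sketch that merges per-tool results and reports annotation coverage. The dictionaries below are toy stand-ins for the outputs of LAST (similarity search), PANNZER (description-based) and InterProScan (domain-based), not actual results from the thesis.

```python
# Toy transcript assembly and per-tool annotation results (illustrative only).
transcripts = {f"t{i}" for i in range(1, 8)}
last_hits = {"t1": "actin", "t2": "tubulin"}
pannzer_hits = {"t1": "actin", "t3": "protease"}
interpro_hits = {"t3": "peptidase domain", "t4": "EF-hand"}

def coverage(hits):
    # Fraction of assembled transcripts that received an annotation.
    return len(set(hits) & transcripts) / len(transcripts)

for name, hits in [("LAST", last_hits), ("PANNZER", pannzer_hits),
                   ("InterProScan", interpro_hits)]:
    print(f"{name}: {coverage(hits):.0%} annotated")

# Union over tools: combining methods raises overall annotation coverage.
combined = set(last_hits) | set(pannzer_hits) | set(interpro_hits)
print(f"combined: {len(combined & transcripts) / len(transcripts):.0%}")
```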

Relevance: 80.00%

Abstract:

This work presents the analysis of wave and turbulence measurements collected at a tidal energy site. A new method is introduced to produce more consistent and rigorous estimates of the power spectral densities of the velocity fluctuations. An analytical function is further proposed to fit the observed spectra, which could serve as input to numerical models predicting power production and structural loading on tidal turbines. Another new approach is developed to correct for the effect of Doppler noise on the high-frequency power spectral densities. The analysis of velocity time series combining wave and turbulent contributions demonstrates that the turbulent motions are coherent throughout the water column, rendering wave coherence-based separation methods inapplicable to our dataset. To avoid this problem, an alternative approach relying on the pressure data collected by the ADCP is introduced, showing appreciable improvement in the wave-turbulence separation.
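
A minimal sketch of one common Doppler-noise correction is shown below: estimate the flat white-noise floor from the near-Nyquist part of a Welch power spectral density and subtract it. The paper's correction is more elaborate; the signal here is synthetic and the 0.8·f_Nyquist cutoff is an assumed choice.

```python
import numpy as np
from scipy.signal import welch

# Synthetic velocity record: a slow oscillation plus white "Doppler" noise.
rng = np.random.default_rng(0)
fs = 8.0                                   # sampling frequency [Hz]
t = np.arange(0, 600, 1 / fs)
u = 0.1 * np.sin(2 * np.pi * 0.1 * t) + 0.02 * rng.standard_normal(t.size)

f, Puu = welch(u, fs=fs, nperseg=1024)     # velocity power spectral density
noise_floor = Puu[f > 0.8 * f[-1]].mean()  # flat plateau set by Doppler noise
Puu_corrected = np.clip(Puu - noise_floor, 0.0, None)
```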

Relevance: 80.00%

Abstract:

Seafood fraud, the misrepresentation of seafood products, has been discovered all around the world in different forms, such as false labelling, species substitution, short-weighting or over-glazing, in order to hide the correct identity, origin or weight of the products. Given the value of seafood products such as canned tuna, swordfish or grouper, the commercial fraud involving these species mainly consists of replacing valuable species with others of little or no value. A similar situation occurs with shelled shrimp or shellfish that are reduced into pieces for commercialization. Food fraud by species substitution is an emerging risk given the increasingly global food supply chain and the potential food safety issues. Economic food fraud is committed when food is deliberately placed on the market for financial gain, deceiving consumers (Woolfe, M. & Primrose, S. 2004). As a result of the increased demand and the globalization of the seafood supply, more fish species are encountered in the market. In this scenario, it becomes essential to unequivocally identify the species. Traditional taxonomy, based primarily on species identification keys, has shown a number of limitations in the use of distinctive features in many animal taxa, limitations that are amplified when fish, crustaceans or shellfish are commercially transformed. Many fish species show a similar texture; thus the certification of fish products is particularly important when the fish have undergone procedures which affect the overall anatomical structure, such as heading, slicing or filleting (Marko et al., 2004). The absence of morphological traits, the main characteristics usually used to identify animal species, represents a challenge, and molecular identification methods are required. Among them, DNA-based methods are the most frequently employed for food authentication (Lockley & Bardsley, 2000). In addition to food authentication and traceability, studies of taxonomy, population and conservation genetics, as well as analyses of dietary habits and prey selection, also rely on genetic analyses, including DNA barcoding technology (Arroyave & Stiassny, 2014; Galimberti et al., 2013; Mafra, Ferreira, & Oliveira, 2008; Nicolé et al., 2012; Rasmussen & Morrissey, 2008), which consists of PCR amplification and sequencing of a specific region of the mitochondrial COI gene. The system proposed by P. Hebert et al. (2003) locates within the mitochondrial COI gene (cytochrome oxidase subunit I) a bio-identification system useful for the taxonomic identification of species (Lo Brutto et al., 2007). The COI region used for genetic identification, the DNA barcode, is short enough to allow, with current technology, its sequence (the pairs of nucleotide bases) to be decoded in a single step. Although this region represents only a tiny fraction of the mitochondrial DNA content in each cell, it has sufficient variability to distinguish the majority of species (Biondo et al. 2016). This technique has already been employed to address the demand for assessing the actual identity and/or provenance of marketed products, as well as to unmask mislabelling and fraudulent substitutions, which are difficult to detect especially in manufactured seafood (Barbuto et al., 2010; Galimberti et al., 2013; Filonzi, Chiesa, Vaghi, & Nonnis Marzano, 2010).
Nowadays, research concerns the use of genetic markers to identify not only the species and/or varieties of fish, but also molecular characters able to trace the origin of products and to provide an effective control tool for producers and consumers along the supply chain, in agreement with local regulations.
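
A minimal sketch of the identification step is shown below: a query COI fragment is assigned to the reference barcode with the highest pairwise identity. Real barcoding pipelines compare ~650 bp COI sequences against curated databases (e.g. BOLD) using alignment tools such as BLAST; the short sequences and the naive identity score here are toy assumptions.

```python
# Toy COI reference barcodes (illustrative sequences, not real barcodes).
references = {
    "Thunnus thynnus": "ATGGCACTGAGCCTTCTAATCCGAGCTGAA",
    "Xiphias gladius": "ATGGCGCTTAGCCTACTTATTCGGGCAGAG",
}
query = "ATGGCACTGAGCCTTCTTATCCGAGCTGAA"

def identity(a, b):
    # Fraction of matching positions over the shared length (no alignment).
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n

best = max(references, key=lambda sp: identity(query, references[sp]))
print(best, f"{identity(query, references[best]):.1%}")
```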

Relevance: 80.00%

Abstract:

The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve upon the performance of individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images and edges are similarities between image pairs. Our first approach employs a mixture Markov model, based on a random walk over multiple graphs, to fuse the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information-gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images from multiple ranked lists obtained with different features. We then study a better-defined application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to embed original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered. To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast R-CNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). SDP improves detection accuracy by exploiting appropriate convolutional features depending on the scale of input object proposals. CRC effectively utilizes convolutional features and greatly reduces the number of negative proposals in a cascaded manner, while maintaining a high recall for true objects. Together, the two strategies improve detection accuracy and reduce computational cost.
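
To make the unsupervised reranking idea concrete, the sketch below greedily maximises a facility-location-style submodular objective that rewards images similar to the query and representative of the rest of the database. The thesis's objective additionally includes an information-gain term over the graph and rank relationships from multiple feature-specific lists; this simplified version with random toy similarities illustrates only the greedy mechanics.

```python
import numpy as np

def greedy_rerank(sim_to_query, sim_matrix, k):
    """Greedily pick k images maximizing query similarity plus coverage."""
    n = len(sim_to_query)
    selected, covered = [], np.zeros(n)
    for _ in range(k):
        gains = []
        for j in range(n):
            if j in selected:
                gains.append(-np.inf)
            else:
                # marginal gain: query similarity + improvement in how well
                # image j "covers" the remaining database images
                gains.append(sim_to_query[j]
                             + np.maximum(sim_matrix[j], covered).sum()
                             - covered.sum())
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim_matrix[best])
    return selected

rng = np.random.default_rng(0)
S = rng.random((6, 6)); S = (S + S.T) / 2   # toy symmetric similarities
print(greedy_rerank(rng.random(6), S, k=3))
```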

Relevance: 80.00%

Abstract:

This paper presents our work on analysing the high-level search within a graph-based hyperheuristic. The graph-based hyperheuristic solves the problem at a higher level by searching through permutations of graph heuristics rather than the actual solutions. The heuristic permutations are then used to construct the solutions. Variable Neighborhood Search, Steepest Descent, Iterated Local Search and Tabu Search are compared, and an analysis of their performance within the high-level search space of heuristics is carried out. Experimental results on benchmark exam timetabling problems demonstrate the simplicity and efficiency of this hyperheuristic approach. They also indicate that the choice of high-level search methodology is not crucial, and that the high-level search should explore the heuristic search space as widely as possible within a limited search time. This simple and general graph-based hyperheuristic may be applied to a range of timetabling and optimisation problems.
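
A minimal sketch of the high-level search is given below: candidate solutions are permutations of low-level graph-colouring heuristics, and a move swaps two heuristics in the permutation. The evaluate function is a hypothetical placeholder; in the actual hyperheuristic it would construct a timetable from the permutation and return its penalty.

```python
import random

# Typical low-level graph-colouring heuristics used by such hyperheuristics.
HEURISTICS = ["largest_degree", "saturation_degree", "largest_enrolment",
              "largest_weighted_degree", "random_ordering"]

def evaluate(perm):
    # Placeholder objective: a real evaluator builds and scores a timetable.
    return sum((i + 1) * len(h) for i, h in enumerate(perm))

def descend(perm, iters=200):
    """Accept random pairwise swaps of heuristics whenever they improve."""
    best, best_cost = list(perm), evaluate(perm)
    for _ in range(iters):
        i, j = random.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        cost = evaluate(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

print(descend(HEURISTICS))
```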

Relevance: 80.00%

Abstract:

The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other, and it is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI's input space from specifications of the GUI, visible GUI behaviors, and static analysis of the GUI's program-code. GUIs are typically dynamic in nature; their user-visible state is guided by the underlying program-code and dynamic program-state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based software application and its underlying program-code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI test cases can be created by examining the interactions between the GUI of a GUI-based application and its program-code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled and deployed for constructing long GUI test cases. These test cases are able to drive the GUI into states that were not reachable using existing models. Implementation and evaluation have been conducted using GUITAR, a fully-automated, open-source GUI testing framework.
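
The sketch below illustrates the central idea in simplified form: treat code-level interactions between event handlers as edges of a graph and enumerate event sequences that follow them, yielding longer test cases than models built from visible GUI behaviour alone. The event names and edges are hypothetical, and this is not the GUITAR implementation.

```python
from itertools import islice

# Event-interaction graph: an edge (a, b) records that a's handler writes
# program state that b's handler reads (hypothetical events and edges).
interacts = {
    "open_file": ["edit_text", "save_file"],
    "edit_text": ["save_file", "undo"],
    "save_file": ["close_file"],
    "undo": ["edit_text"],
    "close_file": [],
}

def test_cases(start, max_len):
    """Depth-first enumeration of interaction-respecting event sequences."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        yield path
        if len(path) < max_len:
            for nxt in interacts.get(path[-1], []):
                stack.append(path + [nxt])

for case in islice(test_cases("open_file", max_len=4), 8):
    print(" -> ".join(case))
```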

Relevance: 80.00%

Abstract:

Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth-dependent statistical models. Length-frequency-based methods become important when more suitable data for growth model estimation are either unavailable or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) that allows time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month, with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing the variance to be estimated at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
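
The model's two core ingredients can be sketched compactly: a von Bertalanffy mean-length curve, L(t) = L_inf(1 - exp(-k(t - t0))), and a normal-mixture log-likelihood for one month's length sample, with component standard deviations tied to mean length through a coefficient of variation. Parameter values and data below are illustrative assumptions, not BSC estimates, and the actual fitting uses the MM algorithm with the Nelder–Mead sub-step described above.

```python
import numpy as np

def vbgm(age, L_inf, k, t0):
    # von Bertalanffy mean length at a given age.
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

def mixture_loglik(lengths, ages, weights, L_inf, k, t0, cv):
    mu = vbgm(ages, L_inf, k, t0)   # one mean length per cohort
    sd = cv * mu                    # spread as a function of mean length
    dens = (weights * np.exp(-0.5 * ((lengths[:, None] - mu) / sd) ** 2)
            / (np.sqrt(2 * np.pi) * sd))
    return np.log(dens.sum(axis=1)).sum()

lengths = np.array([6.1, 7.4, 9.8, 11.2])  # observed lengths, e.g. cm
ages = np.array([0.5, 1.0, 1.5])           # assumed cohort ages [years]
print(mixture_loglik(lengths, ages, np.array([0.5, 0.3, 0.2]),
                     L_inf=18.0, k=1.2, t0=0.0, cv=0.1))
```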

Relevance: 80.00%

Abstract:

Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider, in Chapter 2, a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW, and we identify uncertainty in coefficient estimation, together with uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supply and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive high posterior support. The chapter also introduces the random-walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
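
As a minimal illustration of the kind of time-varying-parameter forecasting exercise running through the thesis, the sketch below lets a single regression coefficient follow a random walk, updates it with a scalar Kalman filter, and compares out-of-sample errors with the no-change random-walk benchmark. The data are simulated, and the regressor x is only a stand-in for a fundamental such as a Taylor-rule differential.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
x = rng.standard_normal(T)                        # simulated fundamental
beta = np.cumsum(0.05 * rng.standard_normal(T))   # true drifting coefficient
y = beta * x + 0.5 * rng.standard_normal(T)       # exchange-rate change

b, P, q, r = 0.0, 1.0, 0.05 ** 2, 0.5 ** 2        # state, variance, noises
errs_tvp, errs_rw = [], []
for t in range(T):
    errs_tvp.append(y[t] - b * x[t])              # forecast made before y[t]
    errs_rw.append(y[t])                          # RW predicts no change
    P += q                                        # Kalman prediction step
    K = P * x[t] / (x[t] ** 2 * P + r)            # Kalman gain
    b += K * (y[t] - b * x[t])                    # update drifting coefficient
    P *= 1 - K * x[t]

print("RMSE TVP:", np.sqrt(np.mean(np.square(errs_tvp))))
print("RMSE RW :", np.sqrt(np.mean(np.square(errs_rw))))
```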