38 results for Graph cuts

in Deakin Research Online - Australia


Relevance: 70.00%

Abstract:

Existing texture synthesis-from-example strategies for polygon meshes typically make use of three components: a multi-resolution mesh hierarchy that allows the overall nature of the pattern to be reproduced before filling in detail; a matching strategy that extends the synthesized texture using the best fit from a texture sample; and a transfer mechanism that copies the selected portion of the texture sample to the target surface. We introduce novel alternatives for each of these components. Use of √2-subdivision surfaces provides the mesh hierarchy and allows fine control over the surface complexity. Adaptive subdivision is used to create an even vertex distribution over the surface. Use of the graph defined by a surface region for matching, rather than a regular texture neighbourhood, provides flexible control over the scale of the texture and allows simultaneous matching against multiple levels of an image pyramid created from the texture sample. We use graph cuts for texture transfer, adapting this scheme to the context of surface synthesis. The resulting surface textures are realistic, tolerant of local mesh detail, and comparable to results produced by texture neighbourhood sampling approaches.
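
The graph-cut transfer step can be illustrated with a small self-contained sketch. This is not the authors' surface formulation (which operates on mesh vertices); it is the same min-cut idea applied to a flat image overlap, with hypothetical patches A and B and networkx supplying the max-flow/min-cut solver.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Hypothetical greyscale patches whose 8-pixel-wide overlap must be stitched:
# A is the already-synthesised texture, B the new patch cut from the sample.
A = rng.random((16, 8))
B = rng.random((16, 8))
h, w = A.shape
BIG = 1e9  # effectively infinite capacity for the hard constraints

def seam_cost(p, q):
    # Kwatra-style cost: how visible a seam between adjacent pixels p and q would be.
    return abs(A[p] - B[p]) + abs(A[q] - B[q])

G = nx.DiGraph()
for y in range(h):
    for x in range(w):
        p = (y, x)
        if x == 0:
            G.add_edge("A", p, capacity=BIG)   # left column must keep A's pixels
        if x == w - 1:
            G.add_edge(p, "B", capacity=BIG)   # right column must take B's pixels
        for q in ((y + 1, x), (y, x + 1)):
            if q[0] < h and q[1] < w:
                c = seam_cost(p, q)
                G.add_edge(p, q, capacity=c)
                G.add_edge(q, p, capacity=c)

# The minimum cut is the least-visible seam; pixels on A's side keep A's colours.
_, (side_A, _) = nx.minimum_cut(G, "A", "B")
mask = np.zeros((h, w), dtype=bool)
for node in side_A:
    if node != "A":
        mask[node] = True
stitched = np.where(mask, A, B)
print("pixels kept from A:", int(mask.sum()))
```

The same construction carries over to the surface setting once the regular pixel grid is replaced by the graph defined over mesh vertices.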

Relevance: 60.00%

Abstract:

Object segmentation is widely recognized as one of the most challenging problems in computer vision. One major problem of existing methods is that most are vulnerable to cluttered backgrounds. Moreover, human intervention is often required to specify foreground/background priors, which restricts the use of object segmentation in real-world scenarios. To address these problems, we propose a novel approach that learns complementary saliency priors for foreground object segmentation in complex scenes. Unlike existing saliency-based segmentation approaches, we propose to learn two complementary saliency maps that reveal the most reliable foreground and background regions. Given such priors, foreground object segmentation is formulated as a binary pixel labelling problem that can be solved efficiently using graph cuts. As such, the confident saliency priors can be used to extract the most salient objects and reduce distraction from cluttered backgrounds. Extensive experiments show that our approach markedly outperforms 16 state-of-the-art methods on three public image benchmarks.
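
For reference, a binary labelling energy of the kind the abstract describes can be written out explicitly. The exact unary and pairwise terms are not given in the abstract, so the formulation below, with hypothetical foreground and background saliency maps $S_f$ and $S_b$ and a smoothness weight $\lambda$, is only an illustrative sketch:

\[
E(L) \;=\; \sum_{p} \Big[ L_p \big(-\log S_f(p)\big) + (1 - L_p)\big(-\log S_b(p)\big) \Big] \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} [\,L_p \neq L_q\,], \qquad L_p \in \{0,1\},
\]

where $L_p = 1$ labels pixel $p$ as foreground and $\mathcal{N}$ is the set of neighbouring pixel pairs. Because the pairwise term is submodular, an energy of this form can be minimised exactly by a single s-t min-cut, which is what makes the graph-cut labelling step efficient.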

Relevance: 60.00%

Abstract:

The maximum a posteriori (MAP) assignment for general-structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to address this problem efficiently. Our novel method, named Tree-based Iterated Local Search (T-ILS), takes advantage of the tractability of tree structures embedded within MRFs to derive a strong local search within an iterated local search (ILS) framework. The method efficiently explores exponentially large neighborhoods using limited memory and without any requirements on the cost functions. We evaluate T-ILS on a simulated Ising model and two real-world vision problems: stereo matching and image denoising. Experimental results demonstrate that our method is competitive with state-of-the-art rivals while offering significant computational gains.
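
T-ILS itself relies on tree-structured moves that the abstract does not detail, so the sketch below shows only the surrounding iterated-local-search skeleton on a small Ising model, with greedy single-site flips standing in for the authors' tree-based local search; all sizes and parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_energy(x, unary, beta):
    """Energy of a +/-1 labelling on a 4-connected grid Ising model."""
    pairwise = -beta * (np.sum(x[:-1, :] * x[1:, :]) + np.sum(x[:, :-1] * x[:, 1:]))
    return float(np.sum(unary * x) + pairwise)

def local_search(x, unary, beta):
    """Greedy single-site flips until no flip lowers the energy
    (a stand-in for the tree-structured local search of T-ILS)."""
    energy = ising_energy(x, unary, beta)
    improved = True
    while improved:
        improved = False
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                x[i, j] *= -1
                e_new = ising_energy(x, unary, beta)
                if e_new < energy:
                    energy, improved = e_new, True
                else:
                    x[i, j] *= -1          # revert the flip
    return x, energy

def iterated_local_search(unary, beta, restarts=20):
    x = np.where(unary < 0, 1, -1)         # greedy initialisation from unary terms
    best, best_e = local_search(x, unary, beta)
    for _ in range(restarts):
        y = best.copy()
        i = rng.integers(0, y.shape[0] - 2)
        j = rng.integers(0, y.shape[1] - 2)
        y[i:i + 3, j:j + 3] *= -1          # perturbation: flip a 3x3 block
        y, e = local_search(y, unary, beta)
        if e < best_e:                      # accept only improving solutions
            best, best_e = y, e
    return best, best_e

unary = rng.normal(size=(8, 8))
labels, energy = iterated_local_search(unary, beta=0.5)
print("estimated MAP energy:", energy)
```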

Relevance: 20.00%

Abstract:

A critical question in data mining is whether we can always trust, unconditionally, what a data mining system discovers. The answer is obviously no. If not, when can we trust the discovery? What factors affect the reliability of the discovery, and how do they affect it? These are interesting questions to investigate.

In this paper we first provide a definition and measurements of reliability, and analyse the factors that affect it. We then examine the impact of model complexity, weak links, varying sample sizes and the ability of different learners on the reliability of graphical model discovery. The experimental results reveal that (1) the larger the sample size used for discovery, the higher the reliability obtained; (2) the stronger a graph link is, the easier it is to discover and thus the higher the reliability that can be achieved; and (3) the complexity of the graph also plays an important role: the more complex a graph is, the more difficult it is to induce and the lower the resulting reliability.
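
As a purely illustrative version of finding (1), the toy simulation below estimates how often a single weak linear link between two variables is recovered at different sample sizes, using a simple Fisher z correlation test. It is not the authors' experimental setup, just a sketch of the reliability-versus-sample-size relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

def link_recovery_rate(n_samples, weight, trials=500):
    """Fraction of trials in which a single X -> Y link of the given strength
    is detected from n_samples observations (Fisher z test at the 5% level)."""
    hits = 0
    for _ in range(trials):
        x = rng.normal(size=n_samples)
        y = weight * x + rng.normal(size=n_samples)
        r = np.corrcoef(x, y)[0, 1]
        z = np.arctanh(r) * np.sqrt(n_samples - 3)   # Fisher z statistic
        if abs(z) > 1.96:                             # two-sided test, alpha = 0.05
            hits += 1
    return hits / trials

# A weak link (coefficient 0.2) becomes reliably recoverable only as n grows.
for n in (20, 50, 200, 1000):
    print(f"n = {n:5d}  recovery rate = {link_recovery_rate(n, weight=0.2):.2f}")
```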

Relevance: 20.00%

Abstract:

This paper studies the polytope of the minimum-span graph labelling problem with integer distance constraints (DC-MSGL). We first introduce a few classes of new valid inequalities for the DC-MSGL defined on general graphs and briefly discuss the separation problems for some of these inequalities. These are the initial steps of a branch-and-cut algorithm for solving the DC-MSGL. Following that, we present our polyhedral results on the dimension of the DC-MSGL polytope and show that, under reasonable conditions, some of the inequalities are facet defining for the polytope of the DC-MSGL on triangular graphs.
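
The abstract does not spell out the authors' formulation, but polyhedral studies of this kind usually start from an integer program along the following lines, with integer labels $x_v$, a span variable $z$, distance requirements $d_{uv}$ and a big-M disjunction; the version below is only a generic sketch:

\begin{align*}
\min\;& z\\
\text{s.t.}\;& x_v \le z, && \forall v \in V,\\
& x_u - x_v \ge d_{uv} - M\,y_{uv}, && \forall (u,v) \in E,\\
& x_v - x_u \ge d_{uv} - M\,(1 - y_{uv}), && \forall (u,v) \in E,\\
& x_v \in \mathbb{Z}_{\ge 0},\; y_{uv} \in \{0,1\}, && \forall v \in V,\ (u,v) \in E.
\end{align*}

Valid inequalities and facets are then statements about the convex hull of the feasible $(x, y, z)$ vectors, and a branch-and-cut algorithm adds violated inequalities of this kind as cutting planes during the search.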

Relevance: 20.00%

Abstract:

The influence of feeding systems on the levels of functional lipids and other fatty acid concentrations in Australian beef was examined. Rump, strip loin and blade cuts obtained from grass feeding, short-term grain feeding (80 days; STGF) and long-term grain feedlot rations (150-200 days; LTFL) were used in the present study. The typical Australian feedlot ration, containing more than 50% barley and/or sorghum balanced with whole cottonseed and protein meals, was used as feed for the STGF and LTFL regimens. Meat cuts from 18 cattle per feeding regimen were trimmed of visible fat and connective tissue and then minced (300 g lean beef); replicate samples of 7 g were used for fatty acid (FA) analysis. There was a significantly higher level of total omega-3 (n-3) and long-chain n-3 FA in grass-fed beef (P < 0.0001) than in the grain-fed groups regardless of cut type. Cuts from STGF beef had significantly reduced levels of n-3 FA and conjugated linoleic acid (CLA) and similar levels of saturated, monounsaturated and n-6 FA compared with grass feeding (P < 0.001). Cuts from LTFL beef had higher levels of saturated, monounsaturated and n-6 FA and trans 18:1 than similar cuts from the other two groups (P < 0.01), indicating that increased length of grain feeding was associated with more fat deposited in the carcass. There was a step-wise increase in trans 18:1 content from grass to STGF to LTFL, suggesting that grain feeding elevates trans FA in beef, probably because of increased intake of 18:2n-6. Only grass-fed beef reached the target of more than 30 mg of long-chain n-3 FA per 100 g muscle recommended by Food Standards Australia New Zealand for a food to be considered a source of omega-3 fatty acids. The proportions of trans 18:1 and n-6 FA were higher (P < 0.001) for both grain-fed beef groups than for grass-fed beef. Data from the present study show that grain feeding decreases functional lipid components (long-chain n-3 FA and CLA) in Australian beef regardless of meat cut, while increasing total trans 18:1 and saturated FA levels.

Relevance: 20.00%

Abstract:

Lung modelling has emerged as a useful method for diagnosing lung diseases. Image segmentation is an important part of lung modelling systems, but the ill-defined nature of image segmentation makes automated lung modelling difficult. The low resolution of lung images further increases the difficulty of lung image segmentation. It is therefore important to identify a suitable segmentation algorithm that can enhance lung modelling accuracy. This paper investigates six image segmentation algorithms used in medical imaging and their application to lung modelling. The algorithms are: normalised cuts, graph cuts, region growing, watershed, Markov random field, and mean shift. The performance of the six segmentation algorithms is determined through a set of experiments on realistic 2D CT lung images. An experimental procedure is devised to measure the performance of the tested algorithms. The measured segmentation accuracies, as well as the execution times, of the six algorithms are then compared and discussed.
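
The abstract does not name the accuracy measure used, but overlap scores such as the Dice coefficient are a common way to compare a segmentation against a reference mask; a minimal sketch with hypothetical masks follows.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Hypothetical 64x64 masks: a reference lung region and one algorithm's output.
truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 20:44] = True
pred = np.zeros((64, 64), dtype=bool)
pred[18:50, 22:46] = True
print(f"Dice overlap: {dice(pred, truth):.3f}")
```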

Relevance: 20.00%

Abstract:

Funnel graphs provide a simple, yet highly effective, means to identify key features of an empirical literature. This paper illustrates the use of funnel graphs to detect publication selection bias, identify the existence of genuine empirical effects and discover potential moderator variables that can help to explain the wide variation routinely found among reported research findings. Applications include union–productivity effects, water price elasticities, common currency-trade effects, minimum-wage employment effects, efficiency wages and the price elasticity of prescription drugs.
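
A funnel graph is simple to produce: plot each study's reported effect against its precision (the inverse of its standard error). The sketch below uses simulated study results rather than any of the literatures listed above; in an unbiased literature the points form a symmetric funnel narrowing towards the true effect, while a missing "wing" suggests publication selection bias.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulated literature: every study estimates the same true effect, but
# smaller studies (larger standard errors) scatter more widely around it.
true_effect = 0.3
se = rng.uniform(0.02, 0.5, size=120)          # standard error of each estimate
estimates = true_effect + rng.normal(0.0, se)  # reported effect sizes
precision = 1.0 / se

plt.scatter(estimates, precision, s=12, alpha=0.6)
plt.axvline(true_effect, linestyle="--", color="grey")
plt.xlabel("Reported effect size")
plt.ylabel("Precision (1 / standard error)")
plt.title("Funnel graph of simulated study estimates")
plt.show()
```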

Relevance: 20.00%

Abstract:

A study of the cholesterol content and fatty acid composition of fresh retail Australian pork was undertaken to determine whether new breeding, feeding and processing methods had resulted in any compositional changes in fresh pork in the marketplace since the surveys undertaken in previous decades. Samples of 13 popular pork cuts were purchased from randomly selected supermarkets and butchers’ stores in urban areas across the socioeconomic scale in three States of Australia, and the separable fat and separable lean were analysed in late 2005 and early 2006. Variability was low across States for saturated and monounsaturated fatty acids, but more pronounced for polyunsaturated fatty acids. The separable lean portions of all pork cuts contained levels of n-3 fatty acids and conjugated linoleic acid (C18:2c9t11) in measurable but not nutritionally claimable amounts, whilst total trans fatty acid levels were very low. There appeared to be some differences in fatty acid composition across States that may have resulted from feeding method. Cholesterol contents were similar to the levels reported in the 1980s and 1990s for separable lean pork tissue, but are presently lower for separable fat tissue than for separable lean.

Relevance: 20.00%

Abstract:

As one of the primary substances in a living organism, protein defines the character of each cell by interacting with the cellular environment to promote the cell’s growth and function [1]. Previous studies in proteomics indicate that the functions of different proteins can be assigned based on protein structures [2,3]. Knowledge of protein structures gives us an overview of protein fold space and helps us understand the evolutionary principles behind structure. By observing the architectures and topologies of protein families, biological processes can be investigated more directly, with much higher resolution and finer detail. For this reason, the analysis of proteins, their structures and their interactions with other molecules is emerging as an important problem in bioinformatics. However, the determination of protein structures is experimentally expensive and time-consuming, which at present makes scientists largely dependent on sequence, rather than the more general structure, to infer the function of a protein. For this reason, data mining technology has been introduced into this area to provide more efficient data processing and knowledge discovery approaches.

Unlike many data mining applications that lack available data, the protein structure determination problem and the study of protein interactions can draw on a vast amount of biologically relevant information, such as the Protein Data Bank (PDB) [4], the Structural Classification of Proteins (SCOP) database [5], the CATH database [6], UniProt [7], and others. The difficulty of predicting protein structures, especially their 3D structures, and the interactions between proteins, as shown in Figure 6.1, lies in the computational complexity of the data. Although a large number of approaches have been developed to determine protein structures, such as ab initio modelling [8], homology modelling [9] and threading [10], more efficient and reliable methods are still greatly needed.

In this chapter, we introduce a state-of-the-art data mining technique, graph mining, which excels at defining and discovering interesting structural patterns in graph data sets, and we take advantage of its expressive power to study protein structures, including protein structure prediction and comparison, and protein-protein interaction (PPI). Current graph pattern mining methods are described and typical algorithms are presented, together with their applications in protein structure analysis.
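
As a small illustration of the kind of graph such mining methods operate on, the sketch below turns hypothetical C-alpha coordinates into a residue contact graph; the 8 Å cutoff is a common but arbitrary choice, and networkx stands in for whatever graph library is actually used. Frequent subgraphs mined across many such graphs can then serve as candidate structural motifs.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)

# Hypothetical C-alpha coordinates for a 50-residue chain (in Angstroms):
# a random walk with ~3.8 A steps, standing in for real PDB coordinates.
coords = np.cumsum(rng.normal(scale=3.8, size=(50, 3)), axis=0)

# Contact graph: residues are nodes; an edge joins any pair (not adjacent
# in sequence) whose C-alpha atoms lie within the distance cutoff.
cutoff = 8.0
G = nx.Graph()
G.add_nodes_from(range(len(coords)))
for i in range(len(coords)):
    for j in range(i + 2, len(coords)):
        if np.linalg.norm(coords[i] - coords[j]) < cutoff:
            G.add_edge(i, j)

print(G.number_of_nodes(), "residues,", G.number_of_edges(), "contacts")
```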

The rest of the chapter is organized as follows: Section 6.2 gives a brief introduction to the fundamentals of proteins, the publicly accessible protein data resources and the current state of protein analysis research; Section 6.3 focuses on one of the state-of-the-art data mining methods, graph mining; Section 6.4 then surveys existing work from the recent decade on protein structure analysis using advanced graph mining methods; finally, Section 6.5 concludes and outlines potential further work.

Relevance: 20.00%

Abstract:

A critical question in data mining is whether we can always trust, unconditionally, what a data mining system discovers. The answer is obviously no. If not, when can we trust the discovery? What factors affect the reliability of the discovery, and how do they affect it? These are interesting questions to investigate. In this chapter we first provide a definition and measurements of reliability, and analyse the factors that affect it. We then examine the impact of model complexity, weak links, varying sample sizes and the ability of different learners on the reliability of graphical model discovery. The experimental results reveal that (1) the larger the sample size used for discovery, the higher the reliability obtained; (2) the stronger a graph link is, the easier it is to discover and thus the higher the reliability that can be achieved; and (3) the complexity of the graph also plays an important role: the more complex a graph is, the more difficult it is to induce and the lower the resulting reliability. We also examine the performance differences between discovery algorithms, which reveals the impact of the discovery process. The experimental results show the superior reliability and robustness of the MML method over standard significance tests in recovering graph links from small samples and weak links.