880 results for Automated segmentation


Relevance: 20.00%

Abstract:

A novel methodology has been developed to quantify important saltwater intrusion parameters in a sandbox-style experiment using image analysis. Existing methods in the literature are based mainly on visual observations, which are subjective, labour intensive and limit the temporal and spatial resolutions that can be analysed. A robust error analysis was undertaken to determine the optimum methodology for converting image light intensity to concentration. Results showed that defining the relationship on a pixel-wise basis provided the most accurate image-to-concentration conversion and allowed quantification of the width of the mixing zone between the saltwater and freshwater. A large image sample rate was used to investigate the transient dynamics of saltwater intrusion, which rendered analysis by visual observation unsuitable. This paper presents the methodologies developed to minimise human input and promote autonomy, provide high-resolution image-to-concentration conversion and allow the quantification of intrusion parameters under transient conditions.
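
As an illustration of the pixel-wise conversion idea, the sketch below fits an independent linear intensity-to-concentration relationship at every pixel from calibration images at known concentrations; the linear form and the function names are assumptions, not the paper's exact calibration.

    import numpy as np

    # Minimal sketch: fit concentration = f(intensity) independently per pixel
    # from calibration images taken at known, uniform concentrations.
    # The linear model is an illustrative assumption.
    def fit_pixelwise(calib_images, concentrations):
        """calib_images: (n, H, W) intensities; concentrations: (n,) known values.
        Returns per-pixel slope and intercept of a least-squares line."""
        n, h, w = calib_images.shape
        x = np.asarray(concentrations, dtype=float)
        y = calib_images.reshape(n, -1).astype(float)
        xc = x - x.mean()
        slope = (xc[:, None] * (y - y.mean(axis=0))).sum(axis=0) / (xc ** 2).sum()
        intercept = y.mean(axis=0) - slope * x.mean()
        return slope.reshape(h, w), intercept.reshape(h, w)

    def to_concentration(image, slope, intercept):
        # Invert the per-pixel fit for a new experiment image.
        return (image - intercept) / slope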

Relevance: 20.00%

Abstract:

The increasing complexity and scale of cloud computing environments, driven by widespread data centre heterogeneity, make measurement-based evaluation very difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy is typically voluminous, difficult to collect without some form of automation, often unavailable in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
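
The paper's tool suite is not detailed in this abstract; purely as a rough, hypothetical illustration of automated topology definition and model creation, the sketch below collects host records into a serialisable model that a simulator could load.

    import json
    from dataclasses import dataclass, asdict

    # Hypothetical illustration: host records would be filled in by an
    # automated collection step rather than typed by hand.
    @dataclass
    class Host:
        name: str
        cores: int
        ram_gb: int
        datacentre: str

    def build_model(hosts):
        """Serialise the captured topology for a simulation tool to load."""
        return json.dumps({"hosts": [asdict(h) for h in hosts]}, indent=2)

    print(build_model([Host("node-01", 16, 64, "dc-east"),
                       Host("node-02", 32, 128, "dc-west")]))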

Relevance: 20.00%

Abstract:

An automated solar reactor system was designed and built to carry out catalytic pyrolysis of scrap rubber tires at 550°C. To maximize solar energy concentration, a two-degrees-of-freedom automated sun tracking system was developed and implemented. Both the azimuth and zenith angles were controlled via feedback from six photo-resistors positioned on a Fresnel lens. The pyrolysis of rubber tires was tested in the presence of two types of acidic catalysts, H-beta and H-USY. Additionally, a photoactive TiO₂ catalyst was used and the products were compared in terms of gas yields and composition. The catalysts were characterized by BET analysis, and the pyrolysis gases and liquids were analyzed using GC-MS. The oil and gas yields were relatively high, with the highest gas yield reaching 32.8% with the H-beta catalyst, while TiO₂ gave the same results as thermal pyrolysis without any catalyst. In the presence of zeolites, the dominant gasoline-like components in the gas were propene and cyclobutene. The TiO₂ and non-catalytic experiments produced a gas containing gasoline-like products of mainly isoprene (76.4% and 88.4%, respectively). The liquids were composed of numerous components spread over a wide distribution of C₁₀ to C₂₉ hydrocarbons, chiefly naphthalene and cyclohexane/cyclohexene derivatives.
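
As a sketch of the two-axis feedback principle, the hypothetical controller below compares opposing photoresistor readings and steps each drive to rebalance them; the sensor pairing, deadband and command interface are all assumptions rather than the paper's controller.

    DEADBAND = 0.05  # ignore imbalances below this fraction of full scale

    def control_actions(sensors):
        """Map normalised photoresistor readings to drive commands.
        The pairing of sensors to axes is an assumed layout."""
        actions = []
        az_err = sensors[0] - sensors[1]   # east vs. west pair -> azimuth
        zen_err = sensors[2] - sensors[3]  # upper vs. lower pair -> zenith
        if abs(az_err) > DEADBAND:
            actions.append(("azimuth", +1 if az_err > 0 else -1))
        if abs(zen_err) > DEADBAND:
            actions.append(("zenith", +1 if zen_err > 0 else -1))
        return actions

    # One control step: the brighter east sensor triggers an azimuth move.
    print(control_actions([0.8, 0.6, 0.5, 0.5, 0.4, 0.4]))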

Relevance: 20.00%

Abstract:

The discovery and clinical application of molecular biomarkers in solid tumors increasingly rely on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires pathological review of haematoxylin and eosin (H&E) stained slides to ensure sample quality, to assess tumor DNA sufficiency by visually estimating the percentage of tumor nuclei, and to annotate the tumor for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage estimates between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We then describe the development and validation of TissueMark, a system for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation by TissueMark, strong concordance with manually drawn boundaries, and identical EGFR mutational status following manual macrodissection from the image-analysis-generated tumor boundaries. Automated cell counting for percentage tumor measurements by TissueMark showed reduced variability and significant correlation (p < 0.001) with benchmark tumor cell counts. This study demonstrates a robust image analysis technology that can facilitate the automated quantitative analysis of tissue samples for molecular profiling in discovery and diagnostics.
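
TissueMark's algorithm is not described in this abstract; the sketch below shows only the generic idea of a percentage-nuclei measurement, counting segmented nuclei whose centroids fall inside an annotated tumor boundary. All inputs are illustrative.

    import numpy as np
    from scipy import ndimage

    # Generic sketch, not the TissueMark algorithm: percentage tumor nuclei
    # as the share of segmented nuclei lying inside an annotated region.
    def percent_tumor_nuclei(nuclei_mask, tumor_region):
        """nuclei_mask: boolean image of segmented nuclei.
        tumor_region: boolean mask of the annotated tumor interior."""
        labels, n_total = ndimage.label(nuclei_mask)
        if n_total == 0:
            return 0.0
        centroids = ndimage.center_of_mass(nuclei_mask, labels,
                                           range(1, n_total + 1))
        in_tumor = sum(bool(tumor_region[int(r), int(c)]) for r, c in centroids)
        return 100.0 * in_tumor / n_total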

Relevance: 20.00%

Abstract:

This paper presents applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimizing manual input and the systematic errors it can introduce. This allowed quantification of the width of the mixing zone, which is difficult to measure with experimental methods based on visual observations. Glass beads of different grain sizes were tested under both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady-state condition sooner when receding than when advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits: a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, indicating faster fluid velocities and higher dispersion. Analysis of the angle of intrusion revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, different physical repeats of the experiment produced an average coefficient of variation of less than 0.18 for the measured toe length and width of the mixing zone.
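
One simple way to quantify the mixing-zone width from a normalised concentration field, sketched below, is to measure the vertical extent of an iso-concentration band per image column; the 0.25-0.75 band is an assumed convention, not necessarily the definition used in the paper.

    import numpy as np

    def mixing_zone_width(conc, lo=0.25, hi=0.75, dy=1.0):
        """conc: (H, W) concentration field normalised to [0, 1];
        dy: physical pixel height. Returns band thickness per column."""
        in_band = (conc >= lo) & (conc <= hi)
        return in_band.sum(axis=0) * dy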

Relevance: 20.00%

Abstract:

We consider the problem of segmenting text documents that have a two-part structure, such as a problem part and a solution part. Documents of this genre include incident reports, which typically describe events relating to a problem followed by the solution that was tried. Segmenting such documents into their two component parts would render them usable in knowledge reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that can harness a corpus of similar documents to model the behavior of the two segments and their inter-relatedness, using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in segmentation accuracy, and that this improved accuracy translates well to improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise and empirically illustrate that it is quite noise tolerant and degrades gracefully as noise increases.
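
A toy sketch of the generative scoring follows: a unigram language model per part, plus a word-translation table standing in for IBM Model 1, combined to score every candidate split point. The smoothing, interpolation weight and function names are illustrative choices, not the paper's estimator.

    import math
    from collections import Counter

    def log_prob(word, lm, vocab_size):
        # Add-one smoothed unigram log-probability; lm is a Counter.
        return math.log((lm[word] + 1) / (sum(lm.values()) + vocab_size))

    def score_split(tokens, k, prob_lm, sol_lm, trans, vocab_size, lam=0.5):
        """Log-likelihood of tokens[:k] as the problem part and tokens[k:]
        as the solution part, whose words may also be 'translated' from
        problem words (trans maps (problem_word, solution_word) -> prob)."""
        problem, solution = tokens[:k], tokens[k:]
        score = sum(log_prob(w, prob_lm, vocab_size) for w in problem)
        for w in solution:
            p_lm = math.exp(log_prob(w, sol_lm, vocab_size))
            p_tr = sum(trans.get((p, w), 0.0) for p in problem) / len(problem)
            score += math.log(lam * p_lm + (1 - lam) * p_tr + 1e-12)
        return score

    def best_split(tokens, prob_lm, sol_lm, trans, vocab_size):
        # Exhaustive search over split points; both parts stay non-empty.
        return max(range(1, len(tokens)),
                   key=lambda k: score_split(tokens, k, prob_lm, sol_lm,
                                             trans, vocab_size))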

Relevance: 20.00%

Abstract:

The popularity of tri-axial accelerometer data loggers for quantifying animal activity through the analysis of signature traces is increasing. However, there is no consensus on how to process the large data sets these devices generate when recording at the necessary high sample rates. In addition, there have been few attempts to validate accelerometer traces against specific behaviours in non-domesticated terrestrial mammals.

Relevance: 20.00%

Abstract:

This paper presents an automated design framework for the development of individual part-forming tools for a composite stiffener. The framework uses parametrically developed design geometries for both the part and its layup tool. It provides a functioning user interface through which part/tool combinations are passed to a virtual environment for utility-based assessment of their features and assemblability characteristics. The work demonstrates clear benefits in process design methods, with conventional design timelines reduced from hours and days to minutes and seconds: the methods developed here produced a digital mock-up of a component with its associated layup tool in less than 3 minutes, and the virtual environment presenting the design to the designer for interactive assembly planning was generated in 20 seconds. Challenges remain in determining the level of realism required to provide an effective learning environment in the virtual world; representing physical phenomena such as gravity, part clashes and standard build functions more accurately requires further work.
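
The framework's CAD internals are not given in the abstract; as a rough, hypothetical illustration of parametric part/tool coupling, the sketch below derives the layup-tool envelope directly from the stiffener parameters, so regenerating the part regenerates the tool.

    from dataclasses import dataclass

    @dataclass
    class Stiffener:
        length_mm: float
        flange_width_mm: float
        web_height_mm: float
        thickness_mm: float

    def layup_tool(part, clearance_mm=5.0):
        """Derive the forming-tool envelope from the part definition,
        so the two geometries stay consistent under parameter changes."""
        return {
            "length_mm": part.length_mm + 2 * clearance_mm,
            "width_mm": part.flange_width_mm + 2 * clearance_mm,
            "height_mm": part.web_height_mm + part.thickness_mm,
        }

    print(layup_tool(Stiffener(1200.0, 60.0, 40.0, 4.0)))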

Relevance: 20.00%

Abstract:

The algorithm developed uses an octree pyramid in which noise is reduced at the expense of spatial resolution. At a certain level, an unsupervised clustering without spatial connectivity constraints is applied. After classification, isolated voxels and insignificant regions are removed by assigning them to their neighbours. The spatial resolution is then increased by downprojecting the regions level by level. At each level, the uncertainty of the boundary voxels is minimised by dynamically selecting and classifying them using an adaptive 3D filter. The algorithm is tested on different data sets, including NMR data.
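
A minimal numpy sketch of the pyramid mechanics, assuming power-of-two dimensions: averaging 2x2x2 blocks going up trades resolution for noise reduction, and labels are downprojected by replication coming back down. The clustering and adaptive boundary filtering steps are omitted.

    import numpy as np

    def pyramid_up(volume):
        """Halve each dimension by averaging 2x2x2 voxel blocks
        (dimensions must be even)."""
        d, h, w = volume.shape
        return volume.reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))

    def downproject(labels):
        """Double each dimension by replicating every label voxel."""
        return labels.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)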

Relevance: 20.00%

Abstract:

The analysis of seabed structure is important in a wide variety of scientific and industrial applications. In this paper, underwater acoustic data produced by bottom-penetrating sonar (Topas) are analyzed using unsupervised volumetric segmentation based on a three-dimensional Gibbs-Markov model. The result is a concise and accurate description of the seabed in which key structures are emphasized. This description is also well suited to further operations, such as the enhancement and automatic recognition of important structures. Experimental results demonstrating the effectiveness of this approach are shown, using Topas data gathered in the North Sea off Horten, Norway.
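
The paper's exact model and optimiser are not spelled out in the abstract; one common way to optimise a Gibbs-Markov (Potts-style) labelling, sketched below, is an iterated-conditional-modes pass with a Gaussian data term and a 6-neighbourhood smoothness penalty. All parameters are illustrative.

    import numpy as np

    NEIGHBOURS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1))

    def icm_pass(volume, labels, means, beta=1.0):
        """One iterated-conditional-modes sweep: each voxel takes the label
        minimising a Gaussian data term plus beta times the number of
        disagreeing 6-neighbours."""
        d, h, w = volume.shape
        new = labels.copy()
        for z in range(d):
            for y in range(h):
                for x in range(w):
                    costs = []
                    for k, mu in enumerate(means):
                        cost = (volume[z, y, x] - mu) ** 2
                        for dz, dy, dx in NEIGHBOURS:
                            nz, ny, nx = z + dz, y + dy, x + dx
                            if 0 <= nz < d and 0 <= ny < h and 0 <= nx < w:
                                cost += beta * (labels[nz, ny, nx] != k)
                        costs.append(cost)
                    new[z, y, x] = int(np.argmin(costs))
        return new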

Relevance: 20.00%

Abstract:

We compare the effect of different text segmentation strategies on speech-based passage retrieval of video. Passage retrieval has mainly been studied to improve document retrieval and to enable question answering. In these domains, the best results were obtained using passages defined by the paragraph structure of the source documents or by using arbitrary overlapping passages. For the retrieval of relevant passages in a video using speech transcripts, no author-defined segmentation is available. We compare retrieval results from four types of segments based on the speech channel of the video: fixed-length segments, a sliding window, semantically coherent segments and prosodic segments. We evaluated the methods on the corpus of the MediaEval 2011 Rich Speech Retrieval task. Our main conclusion is that retrieval results depend heavily on the right choice of segment length. However, results using segmentation into semantically coherent parts depend much less on segment length. In particular, the quality of fixed-length and sliding-window segmentation drops quickly as segment length increases, while the quality of semantically coherent segments is much more stable. Thus, if coherent segments are defined, longer segments can be used, and consequently fewer segments have to be considered at retrieval time.
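
The two simplest strategies in the comparison are easy to state precisely; a short sketch follows, with word-based rather than time-based segments assumed for brevity.

    def fixed_length(words, seg_len=100):
        """Non-overlapping fixed-length segments."""
        return [words[i:i + seg_len] for i in range(0, len(words), seg_len)]

    def sliding_window(words, seg_len=100, stride=50):
        """Overlapping segments advancing by a fixed stride."""
        last_start = max(1, len(words) - seg_len + 1)
        return [words[i:i + seg_len] for i in range(0, last_start, stride)]

    transcript = ["word"] * 250             # toy stand-in for a speech transcript
    print(len(fixed_length(transcript)),    # 3 segments
          len(sliding_window(transcript)))  # 4 segments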