118 results for "Fully automated"
Automated image analysis for experimental investigations of salt water intrusion in coastal aquifers
Abstract:
A novel methodology has been developed to quantify important saltwater intrusion parameters in a sandbox-style experiment using image analysis. Existing methods found in the literature are based mainly on visual observations, which are subjective, labour intensive and limit the temporal and spatial resolutions that can be analysed. A robust error analysis was undertaken to determine the optimum methodology for converting image light intensity to concentration. Results showed that defining a relationship on a pixel-wise basis provided the most accurate image-to-concentration conversion and allowed quantification of the width of the mixing zone between the saltwater and freshwater. A large image sample rate, which rendered analysis by visual observation unsuitable, was used to investigate the transient dynamics of saltwater intrusion. This paper presents the methodologies developed to minimise human input and promote autonomy, provide high-resolution image-to-concentration conversion and allow the quantification of intrusion parameters under transient conditions.
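To make the pixel-wise conversion concrete, the sketch below fits an independent linear intensity-to-concentration relationship at every pixel from calibration images taken at known concentrations. The function names, the linear form and the calibration procedure are illustrative assumptions; the paper's actual fitting procedure may differ.

```python
import numpy as np

def fit_pixelwise_calibration(intensity_stack, known_concentrations):
    """Fit a per-pixel linear map from light intensity to salt concentration.

    intensity_stack: array of shape (n_cal, H, W), one calibration image per
    known concentration.  known_concentrations: array of shape (n_cal,).
    Returns per-pixel slope and intercept arrays of shape (H, W).
    """
    c = np.asarray(known_concentrations, dtype=float)   # (n_cal,)
    I = np.asarray(intensity_stack, dtype=float)        # (n_cal, H, W)
    I_mean = I.mean(axis=0)
    c_mean = c.mean()
    # Least-squares slope and intercept computed independently for every pixel.
    slope = ((I - I_mean) * (c - c_mean)[:, None, None]).sum(axis=0) \
            / ((I - I_mean) ** 2).sum(axis=0)
    intercept = c_mean - slope * I_mean
    return slope, intercept

def intensity_to_concentration(image, slope, intercept):
    """Convert a measured intensity image into a concentration field."""
    return slope * np.asarray(image, dtype=float) + intercept
```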
Abstract:
The combined effect of special relativity and electron degeneracy on Langmuir waves is analyzed by utilizing a rigorous fully relativistic hydrodynamic model. Assuming a traveling wave solution form, a set of conservation laws is identified, together with a pseudo-potential function depending on the relativistic parameter p_F/(m c) (where p_F is the Fermi momentum, m is the mass of the charge carriers and c the speed of light), as well as on the amplitude of the electrostatic energy perturbation.
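For context, the relativistic parameter quoted above can be written out explicitly. The Fermi-momentum expression below is the standard zero-temperature result in terms of the equilibrium electron number density n_0; it is added here only for illustration, and the symbol ξ for the dimensionless parameter is not taken from the abstract.

```latex
% Standard zero-temperature Fermi momentum and the dimensionless
% relativistic degeneracy parameter referred to in the abstract.
\[
  p_{F} = \hbar \left( 3\pi^{2} n_{0} \right)^{1/3},
  \qquad
  \xi \equiv \frac{p_{F}}{m c},
\]
% $\xi \ll 1$ corresponds to weakly relativistic degeneracy,
% $\xi \gtrsim 1$ to the strongly relativistic regime.
```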
Abstract:
The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, difficult to collect without some form of automation, often unavailable in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
Abstract:
An automated solar reactor system was designed and built to carry out catalytic pyrolysis of scrap rubber tires at 550°C. To maximize solar energy concentration, a two-degrees-of-freedom automated sun tracking system was developed and implemented. Both the azimuth and zenith angles were controlled via feedback from six photoresistors positioned on a Fresnel lens. The pyrolysis of rubber tires was tested in the presence of two types of acidic catalysts, H-beta and H-USY. Additionally, a photoactive TiO2 catalyst was used and the products were compared in terms of gas yields and composition. The catalysts were characterized by BET analysis and the pyrolysis gases and liquids were analyzed using GC-MS. The oil and gas yields were relatively high, with the highest gas yield reaching 32.8% with the H-beta catalyst, while TiO2 gave the same results as thermal pyrolysis without any catalyst. In the presence of zeolites, the dominant gasoline-like components in the gas were propene and cyclobutene. The TiO2 and non-catalytic experiments produced a gas containing gasoline-like products consisting mainly of isoprene (76.4% and 88.4%, respectively). The liquids were composed of numerous components spread over a wide distribution of C10 to C29 hydrocarbons of naphthalene and cyclohexane/ene derivatives.
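The abstract describes two-axis tracking driven by feedback from six photoresistors. The sketch below illustrates one common way such feedback can be turned into drive corrections, using opposing sensor pairs and a proportional gain; the sensor layout, gain values and function names are assumptions, not details from the paper.

```python
def tracking_corrections(readings, gain=0.05, deadband=2.0):
    """Compute signed azimuth/zenith corrections from six photoresistor readings.

    `readings` is assumed to be a dict with keys 'east', 'west', 'up', 'down',
    'ref1', 'ref2' (hypothetical sensor layout).  Opposing pairs drive each
    axis; the two reference sensors gate tracking when light is too low.
    Returns (d_azimuth, d_zenith) drive increments.
    """
    if (readings['ref1'] + readings['ref2']) / 2 < 50:
        return 0.0, 0.0                       # overcast or dark: hold position
    az_error = readings['east'] - readings['west']
    zen_error = readings['up'] - readings['down']
    d_az = gain * az_error if abs(az_error) > deadband else 0.0
    d_zen = gain * zen_error if abs(zen_error) > deadband else 0.0
    return d_az, d_zen

# Example: sun slightly to the east of and above the lens centre.
print(tracking_corrections(
    {'east': 520, 'west': 480, 'up': 510, 'down': 470, 'ref1': 500, 'ref2': 500}))
```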
Abstract:
The discovery and clinical application of molecular biomarkers in solid tumors increasingly relies on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires the pathological review of haematoxylin & eosin (H&E) stained slides to ensure sample quality and tumor DNA sufficiency, by visually estimating the percentage of tumor nuclei, and tumor annotation for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We subsequently describe the development and validation of a system called TissueMark for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation of cases using TissueMark, strong concordance with manually drawn boundaries, and identical EGFR mutational status following manual macrodissection from the image analysis generated tumor boundaries. Automated analysis of cell counts for % tumor measurements by TissueMark showed reduced variability and significant correlation (p < 0.001) with benchmark tumor cell counts. This study demonstrates a robust image analysis technology that can facilitate the automated quantitative analysis of tissue samples for molecular profiling in discovery and diagnostics.
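As an illustration of the percentage tumor nuclei measurement, the sketch below computes the fraction of detected nuclei classified as tumor within an annotated boundary mask. The input representation and names are hypothetical; TissueMark's actual cell detection and classification pipeline is not described in the abstract.

```python
import numpy as np

def percent_tumor_nuclei(cell_centroids, is_tumor, boundary_mask):
    """Estimate the % tumor nuclei inside an annotated tumor boundary.

    cell_centroids: (N, 2) integer array of (row, col) nucleus positions.
    is_tumor:       (N,) boolean array, one flag per detected nucleus.
    boundary_mask:  2-D boolean array, True inside the annotated region.
    """
    rows, cols = cell_centroids[:, 0], cell_centroids[:, 1]
    inside = boundary_mask[rows, cols]        # nuclei within the annotation
    total = inside.sum()
    if total == 0:
        return 0.0
    return 100.0 * (is_tumor & inside).sum() / total
```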
Abstract:
Large integer multiplication is a major performance bottleneck in fully homomorphic encryption (FHE) schemes over the integers. In this paper, two optimised multiplier architectures for large integer multiplication are proposed. The first is a low-latency hardware architecture of an integer-FFT multiplier. The second applies low Hamming weight (LHW) parameters to create a novel hardware architecture for large integer multiplication in integer-based FHE schemes. The proposed architectures are implemented, verified and compared on the Xilinx Virtex-7 FPGA platform. Finally, the proposed implementations are employed to evaluate the large multiplication in the encryption step of FHE over the integers. The analysis shows a speed improvement factor of up to 26.2 for the low-latency design compared to the corresponding original integer-based FHE software implementation. When the proposed LHW architecture is combined with the low-latency integer-FFT accelerator to evaluate a single FHE encryption operation, the performance results show that a speed improvement by a factor of approximately 130 is possible.
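The hardware integer-FFT architecture itself is not reproduced here, but the underlying idea of FFT-based large integer multiplication can be sketched in software. The limb size, FFT length and use of a floating-point FFT below are illustrative choices; a real implementation for FHE-sized operands would use an exact (number-theoretic) transform to avoid rounding limits.

```python
import numpy as np

def fft_multiply(a, b, base_bits=16):
    """Multiply two non-negative Python integers via a floating-point FFT.

    The operands are split into base-2**base_bits limbs, convolved with
    numpy's FFT, then re-carried.  Double precision limits the safe operand
    size, which is why exact transforms are preferred in practice.
    """
    base = 1 << base_bits

    def to_limbs(x):
        limbs = []
        while x:
            limbs.append(x & (base - 1))
            x >>= base_bits
        return limbs or [0]

    la, lb = to_limbs(a), to_limbs(b)
    n = 1
    while n < len(la) + len(lb):
        n <<= 1                                   # FFT length (power of two)
    fa = np.fft.rfft(np.array(la, dtype=float), n)
    fb = np.fft.rfft(np.array(lb, dtype=float), n)
    conv = np.rint(np.fft.irfft(fa * fb, n))      # exact convolution, rounded

    result, carry = 0, 0
    for i, coeff in enumerate(conv):              # propagate carries
        total = int(coeff) + carry
        result += (total % base) << (i * base_bits)
        carry = total // base
    return result + (carry << (len(conv) * base_bits))

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321
```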
Abstract:
We have designed software that can 'look' at recorded ultrasound sequences. We analyzed fifteen video sequences representing recorded ultrasound scans of nine fetuses. Our method requires a small amount of user-labelled pixels for processing the first frame. These initialize GrowCut [1], a background removal algorithm, which was used for separating the fetus from its surrounding environment (segmentation). For each subsequent frame, user input is no longer necessary, as some of the pixels inherit labels from the previously processed frame. This gives our software the ability to track movement. Two sonographers rated the results of our computer's 'vision' on a scale from 1 (poor fit) to 10 (excellent fit). They assessed tracking accuracy for the entire video as well as segmentation accuracy (the ability to identify fetus from non-fetus) for every 100th processed frame. There was no appreciable deterioration in the software's ability to track the fetus over time.
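The label inheritance between frames can be illustrated with a minimal sketch: confident seeds for the next frame are taken from deep inside the previous foreground and background regions, leaving a band around the old boundary for the segmentation algorithm (GrowCut in the paper) to resolve. The erosion-based confidence band and the function names are a simplification assumed here, not the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def propagate_seeds(prev_fetus_mask, erosion_iters=5):
    """Derive seed labels for the next frame from the previous segmentation.

    Pixels deep inside the previous fetal region become foreground seeds,
    pixels deep inside the previous background become background seeds, and
    the uncertain band near the old boundary is left unlabelled.
    Returns an array: 1 = fetus seed, 2 = background seed, 0 = unlabelled.
    """
    prev_fetus_mask = prev_fetus_mask.astype(bool)
    fg_seeds = binary_erosion(prev_fetus_mask, iterations=erosion_iters)
    bg_seeds = binary_erosion(~prev_fetus_mask, iterations=erosion_iters)
    seeds = np.zeros(prev_fetus_mask.shape, dtype=np.uint8)
    seeds[fg_seeds] = 1
    seeds[bg_seeds] = 2
    return seeds
```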
Abstract:
Despite the lack of a shear-rich tachocline region, low-mass fully convective (FC) stars are capable of generating strong magnetic fields, indicating that a dynamo mechanism fundamentally different from the solar dynamo is at work in these objects. We present a self-consistent three-dimensional model of magnetic field generation in low-mass FC stars. The model utilizes the anelastic magnetohydrodynamic equations to simulate compressible convection in a rotating sphere. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The Zeeman–Doppler-Imaging technique applied to synthetic spectropolarimetric data based on our model recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass FC stars.
Abstract:
This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimizing manual inputs and the systematic errors they can introduce. This allowed the quantification of the width of the mixing zone, which is difficult to measure with experimental methods based on visual observations. Glass beads of different grain sizes were tested under both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady-state condition sooner while receding than while advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits: a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, indicating faster fluid velocities and higher dispersion. The analysis of the angle of intrusion revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, different physical repeats of the experiment produced an average coefficient of variation of less than 0.18 for the measured toe length and width of the mixing zone.
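For reference, the coefficient of variation reported across physical repeats is simply the ratio of the standard deviation to the mean of the repeated measurements; the short sketch below shows the computation on hypothetical toe-length values, which are not data from the paper.

```python
import numpy as np

def coefficient_of_variation(repeat_measurements):
    """CV = sample standard deviation / mean, computed across physical repeats."""
    x = np.asarray(repeat_measurements, dtype=float)
    return x.std(ddof=1) / x.mean()

# Hypothetical toe-length measurements (cm) from three repeats of one test case.
print(coefficient_of_variation([12.1, 11.4, 13.0]))
```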
Abstract:
The popularity of tri-axial accelerometer data loggers to quantify animal activity through the analysis of signature traces is increasing. However, there is no consensus on how to process the large data sets that these devices generate when recording at the necessary high sample rates. In addition, there have been few attempts to validate accelerometer traces with specific behaviours in non-domesticated terrestrial mammals.
Abstract:
This paper presents an automated design framework for the development of individual part forming tools for a composite stiffener. The framework uses parametrically developed design geometries for both the part and its layup tool. The framework has been developed with a functioning user interface, where part/tool combinations are passed to a virtual environment for utility-based assessment of their features and assemblability characteristics. The work demonstrates clear benefits in process design methods, with conventional design timelines reduced from hours and days to minutes and seconds. The methods developed here were able to produce a digital mock-up of a component with its associated layup tool in less than 3 minutes. The virtual environment presenting the design to the designer for interactive assembly planning was generated in 20 seconds. Challenges still exist in determining the level of reality required to provide an effective learning environment in the virtual world. Fully representing physical phenomena such as gravity and part clashes, as well as standard build functions, requires further work to model real physical behaviour more accurately.
Abstract:
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. The algorithm combines an effective three-dimensional feature set with support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.
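The pairing of a small feature set with support vector machine classification can be sketched as follows. The three features, their example values and the RBF kernel choice are placeholders; the abstract does not specify QUARTZ's actual feature definitions or kernel.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical 3-feature vectors derived from the segmented vessel map
# (e.g. vessel area fraction, vessel fragmentation, field coverage), with
# labels 1 = adequate image and 0 = inadequate image.
X_train = np.array([[0.08, 0.10, 0.95],
                    [0.07, 0.15, 0.90],
                    [0.01, 0.70, 0.40],
                    [0.02, 0.65, 0.35]])
y_train = np.array([1, 1, 0, 0])

# RBF-kernel SVM on standardized features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

print(clf.predict([[0.075, 0.12, 0.93]]))   # expected: adequate (1)
```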