12 results for Thai

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

10.00%

Publisher:

Abstract:

Light and electron microscopy were used to characterize the structure of secretory cells and their products involved in the attachment of two monogenean parasites of fish, in order to understand their role in the attachment process. In Bravohollisia rosetta and Bravohollisia gussevi, peduncular gland cells with two nuclei, granular endoplasmic reticulum, and Golgi bodies produce dual electron-dense (DED) secretory bodies with a homogeneous electron-dense rind and a less electron-dense fibrillar core (oval and concave in B. rosetta and oval in B. gussevi). The DED secretory bodies are altered as they migrate from the gland cell to the haptoral reservoir, the superficial anchor grooves, and into the gill tissues. The contents of the DED secretory bodies are exocytosed into the reservoirs, with fibrillar cores persisting in the matrix, some of which condense to form highly electron-dense spherical bodies. Small, oval, electron-dense bodies occur in the grooves, while no inclusions are visible in the homogeneous exudate within the gill tissues. The single tubular extension of the reservoir enters a bifurcate channel within the anchor via a concealed, crevice-like opening on one side of the anchor. The channel directs secretions into the left and right grooves via concealed apertures. The secretions, introduced into the tissues by the anchors, probably assist in attachment. The secretions manifest externally as net-like structures and, in some cases, were observed still attached to the point of exudation on anchors that had detached from the gill tissues. This suggests that even when the anchors are detached, the worms can remain attached to the gill tissues via these net-like structures. Based on this, it is postulated that the net-like secretions probably function as a safety line that anchors the worm during the onset of locomotion and, in doing so, reduces the risk of tearing host tissues.

Relevance:

10.00%

Publisher:

Abstract:

A total of 549 samples of rice, maize, wheat, sorghum and millet were obtained from markets in Ghana, the EU, the US and Asia. Analysis of the samples, originating from 21 countries on 5 continents, helped to establish global mean trace element concentrations in grains, thus placing the Ghanaian data within a global context. Ghanaian rice was generally low in potentially toxic elements but high in essential nutrient elements. Arsenic concentrations in rice from the US (0.22 mg/kg) and Thailand (0.15 mg/kg) were higher than in Ghanaian rice (0.11 mg/kg). The percentage of inorganic arsenic in the latter (83%) was, however, higher than for US (42%) and Thai rice (67%). Total arsenic concentration in Ghanaian maize, sorghum and millet samples (0.01 mg/kg) was an order of magnitude lower than in Ghanaian rice, indicating that a shift from rice-centric to multigrain diets could help reduce health risks posed by dietary exposure to inorganic As. (C) 2011 Elsevier Ltd. All rights reserved.
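
As a quick check on the figures quoted above, the sketch below multiplies the reported total arsenic concentrations by the reported inorganic percentages to compare the implied inorganic arsenic content of the three rice sources; the comparison is purely illustrative and is not taken from the paper.

```python
# Inorganic arsenic implied by the figures quoted in the abstract
# (total As x fraction inorganic). Illustration only.

grains = {
    # name: (total As in mg/kg, fraction inorganic)
    "Ghanaian rice": (0.11, 0.83),
    "US rice":       (0.22, 0.42),
    "Thai rice":     (0.15, 0.67),
}

for name, (total_as, frac_inorganic) in grains.items():
    inorganic = total_as * frac_inorganic
    print(f"{name}: {inorganic:.3f} mg/kg inorganic As")

# Ghanaian maize, sorghum and millet are reported at ~0.01 mg/kg total As,
# roughly an order of magnitude below Ghanaian rice, which underpins the
# multigrain-diet recommendation.
```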

Relevance:

10.00%

Publisher:

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed here by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups is to the application to be executed on the cloud: memory, processor, computation and storage. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
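
A minimal sketch of the weight-based ranking idea described above: each VM carries a normalised benchmark score for the memory, processor, computation and storage groups, and the user's four weights collapse these into a single score used to rank the VMs. The VM names, scores and the weighted-sum scoring rule are illustrative assumptions rather than the paper's exact methodology.

```python
# Hypothetical weight-based VM ranking: normalised per-group benchmark
# scores combined with user-supplied weights into a single rank score.

benchmarks = {
    # vm_type: {group: normalised score in [0, 1], higher is better}
    "m3.large":  {"memory": 0.6, "processor": 0.7, "computation": 0.65, "storage": 0.5},
    "c3.xlarge": {"memory": 0.5, "processor": 0.9, "computation": 0.85, "storage": 0.4},
    "r3.large":  {"memory": 0.9, "processor": 0.6, "computation": 0.55, "storage": 0.45},
}

# User-supplied weights: how important each group is to the application.
weights = {"memory": 0.2, "processor": 0.4, "computation": 0.3, "storage": 0.1}

def score(vm_scores):
    return sum(weights[group] * vm_scores[group] for group in weights)

ranking = sorted(benchmarks, key=lambda vm: score(benchmarks[vm]), reverse=True)
for rank, vm in enumerate(ranking, start=1):
    print(rank, vm, round(score(benchmarks[vm]), 3))
```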

Relevance:

10.00%

Publisher:

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight, time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes: in the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second mode, historic benchmark data is used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
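
The two modes can be illustrated with a small sketch: mode one ranks VMs from scores obtained by benchmarking a container on each VM, and the hybrid mode blends those scores with historic benchmark data. The blending weight and all figures below are assumptions for illustration only, not DocLite's actual implementation.

```python
# Illustrative sketch of the two ranking modes described above.

container_scores = {"vm_a": 0.72, "vm_b": 0.81, "vm_c": 0.64}   # mode 1: fresh, partial-VM benchmarks
historic_scores  = {"vm_a": 0.70, "vm_b": 0.78, "vm_c": 0.69}   # previously collected benchmark data

def rank(scores):
    return sorted(scores, key=scores.get, reverse=True)

def hybrid(alpha=0.5):
    # Mode 2: weighted combination of fresh container scores and historic data.
    # The value of alpha is an assumption for illustration.
    return {vm: alpha * container_scores[vm] + (1 - alpha) * historic_scores[vm]
            for vm in container_scores}

print("mode 1 ranking:", rank(container_scores))
print("hybrid ranking:", rank(hybrid()))
```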

Relevance:

10.00%

Publisher:

Abstract:

A Bag of Distributed Tasks (BoDT) can benefit from decentralised execution on the cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm, which considers the user's preference for performance and cost, is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
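
A simplified sketch of the performance/cost trade-off: candidate plans use different numbers of identical VMs, and a user preference weight balances estimated completion time against monetary cost. The task counts, runtimes, prices and the weighted-sum objective are assumptions, not the heuristic algorithm proposed in the paper.

```python
# Toy plan selection for a bag of tasks: trade completion time against cost.
import math

N_TASKS = 1000            # tasks in the bag (assumed)
TASK_TIME = 2.0           # seconds per task on one VM (assumed)
VM_PRICE_PER_HOUR = 0.10  # assumed hourly VM price

def plan_metrics(n_vms):
    makespan = math.ceil(N_TASKS / n_vms) * TASK_TIME                 # seconds
    cost = n_vms * math.ceil(makespan / 3600) * VM_PRICE_PER_HOUR     # hourly billing
    return makespan, cost

def best_plan(perf_weight=0.7, candidates=range(1, 65)):
    # Normalise time and cost over the candidate set, then minimise the
    # weighted sum; perf_weight expresses the user's preference for speed.
    metrics = {n: plan_metrics(n) for n in candidates}
    max_time = max(t for t, _ in metrics.values())
    max_cost = max(c for _, c in metrics.values())
    def objective(n):
        t, c = metrics[n]
        return perf_weight * t / max_time + (1 - perf_weight) * c / max_cost
    return min(candidates, key=objective)

n = best_plan()
print("VMs to provision:", n, "-> (makespan s, cost $):", plan_metrics(n))
```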

Relevance:

10.00%

Publisher:

Abstract:

When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model, generated using an automated constraint modelling system, that solves the workflow deployment problem. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time and provides a speed-up of 1.3x to 2.5x over centralised approaches.
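
The placement problem can be illustrated with a brute-force sketch: given the regions holding the workflow's data endpoints and candidate regions for an orchestration engine, choose the engine region that minimises total data transfer time. The region names are real EC2 regions, but the data sizes and inter-region bandwidths are assumed figures, and the paper formulates this as a constraint model rather than the exhaustive search shown here.

```python
# Pick the engine region that minimises total data movement time (toy model).

# Data (GB) the engine moves to/from each service, and the region in which
# each service endpoint lives (assumed workflow).
transfers = [("us-east-1", 2.0), ("eu-west-1", 1.5), ("ap-southeast-1", 0.5)]

# Assumed effective bandwidth (GB/s) between pairs of regions.
bandwidth = {
    ("us-east-1", "us-east-1"): 1.0, ("us-east-1", "eu-west-1"): 0.08,
    ("us-east-1", "ap-southeast-1"): 0.04, ("eu-west-1", "eu-west-1"): 1.0,
    ("eu-west-1", "ap-southeast-1"): 0.05, ("ap-southeast-1", "ap-southeast-1"): 1.0,
}

def bw(a, b):
    return bandwidth.get((a, b)) or bandwidth[(b, a)]

candidates = ["us-east-1", "eu-west-1", "ap-southeast-1"]

def transfer_time(engine_region):
    return sum(size / bw(engine_region, region) for region, size in transfers)

best = min(candidates, key=transfer_time)
print(best, round(transfer_time(best), 1), "seconds of data movement")
```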

Relevance:

10.00%

Publisher:

Abstract:

Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite, a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
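
A sketch of the container-based idea behind the tool: run a benchmark inside a Docker container constrained to a user-defined slice of the VM (memory size and CPU cores) so that only that portion is exercised. The image name and benchmark command are placeholders, and while the --memory and --cpus flags are standard Docker options, this is not DocLite's actual code.

```python
# Run a benchmark in a resource-constrained container (illustrative only).
import subprocess

def run_constrained_benchmark(mem_gb=1, cpus=2,
                              image="benchmark-image",      # placeholder image name
                              command=("run-benchmark",)):  # placeholder benchmark command
    docker_cmd = [
        "docker", "run", "--rm",
        f"--memory={mem_gb}g",   # cap container memory
        f"--cpus={cpus}",        # cap container CPU cores
        image, *command,
    ]
    return subprocess.run(docker_cmd, capture_output=True, text=True)

result = run_constrained_benchmark()
print(result.stdout)
```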

Relevance:

10.00%

Publisher:

Abstract:

Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to incurred costs and unpredictable performance. The problem is further complicated when there is not enough information to effectively schedule a job such that its deadline is satisfied and the cost is minimised. In this paper, we present an approach to schedule jobs, whose performance is unknown before execution, with deadlines on the cloud. By performing a sampling phase to collect the necessary information about those jobs, our approach delivers scheduling decisions within 10% cost and a 16% violation rate compared to the ideal setting, which has complete knowledge of each job from the beginning. Our proposed algorithm also outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
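
A minimal sketch of the sampling idea: execute a small sample of a job's tasks to estimate the per-task runtime, then provision the smallest number of VMs predicted to finish the remaining tasks before the deadline. The figures and the provisioning rule are illustrative assumptions, not the scheduling algorithm proposed in the paper.

```python
# Estimate task runtime from a sample, then size the VM pool to meet a deadline.
import math
import random

random.seed(0)

def sample_runtime(task_runtimes, sample_size=10):
    sample = random.sample(task_runtimes, sample_size)
    return sum(sample) / len(sample)          # estimated seconds per task

def vms_needed(n_remaining, est_task_time, time_left):
    per_vm_capacity = max(time_left // est_task_time, 1)   # tasks one VM can finish in time
    return math.ceil(n_remaining / per_vm_capacity)

# Example: 500 tasks with true runtimes around 60 s, a 2-hour deadline,
# and 5 minutes already spent on the sampling phase (all assumed).
runtimes = [random.gauss(60, 10) for _ in range(500)]
est = sample_runtime(runtimes)
print("estimated task time:", round(est, 1), "s")
print("VMs to provision:", vms_needed(500 - 10, est, time_left=7200 - 300))
```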

Relevance:

10.00%

Publisher:

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the availability of a wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses the above question by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation and storage-related operations are to an application. The user can provide either a set of four abstract weights or eight fine-grain weights, based on their knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings - one based only on the performance of the VMs, and the other taking both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved by the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three ranked VMs produced by the methodology.
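
A hedged sketch of how the two rankings described above could be derived from the same weighted benchmark score: one ordering by performance alone and one by a cost-aware value such as performance per unit price. The VM names, scores, prices and the value metric are illustrative assumptions.

```python
# Produce a performance-only and a cost-aware VM ranking (illustrative figures).

vms = {
    # vm: (weighted performance score, hourly price in USD)
    "c3.xlarge": (0.84, 0.210),
    "r3.large":  (0.71, 0.166),
    "m3.large":  (0.66, 0.133),
}

perf_rank = sorted(vms, key=lambda v: vms[v][0], reverse=True)
cost_rank = sorted(vms, key=lambda v: vms[v][0] / vms[v][1], reverse=True)

print("performance ranking:      ", perf_rank)
print("performance/cost ranking: ", cost_rank)
```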