66 results for heterogeneous habitat
Abstract:
Interest in the area of collaborative Unmanned Aerial Vehicles (UAVs) in a Multi-Agent System is growing as a way to complement the strengths and weaknesses of the human-machine relationship. To achieve effective management of multiple heterogeneous UAVs, the agents must communicate their status models to one another. This paper presents the effects on operator Cognitive Workload (CW), Situation Awareness (SA), trust and performance of increasing autonomy-capability transparency through text-based communication from the UAVs to the human agents. The results revealed a reduction in CW, an increase in SA, an increase in the Competence, Predictability and Reliability dimensions of trust, and an improvement in operator performance.
Abstract:
Tridiagonal diagonally dominant linear systems arise in many scientific and engineering applications. The standard Thomas algorithm for solving such systems is inherently serial, forming a bottleneck in computation. Algorithms such as cyclic reduction and SPIKE reduce a single large tridiagonal system to multiple small independent systems which can be solved in parallel. We have developed portable OpenCL implementations of the cyclic reduction and SPIKE algorithms with the intent to target a range of co-processors in a heterogeneous computing environment, including Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs) and other multi-core processors. In this paper, we evaluate these designs in the context of solver performance, resource efficiency and numerical accuracy.
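For context, the Thomas algorithm the abstract names as the serial baseline is a specialised Gaussian elimination whose forward sweep and back substitution are both strictly sequential recurrences. The NumPy sketch below (an illustration, not the authors' OpenCL code) makes those dependency chains visible; cyclic reduction and SPIKE break exactly these chains into independent sub-systems.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
    c = super-diagonal, d = right-hand side (a[0] and c[-1] unused).
    Serial sketch for illustration only."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    # Forward sweep: row i needs row i-1, a strictly serial chain.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution: another sequential recurrence.
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Cyclic reduction, by contrast, eliminates the odd-indexed unknowns of all rows simultaneously, halving the system at each of roughly log2(n) levels, which is what maps well onto FPGAs and GPUs.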
Abstract:
- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- New EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 for moderate to high tolerances.
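For reference, the underlying scheme for a semilinear problem y' = Ay + g(y) takes steps y_{n+1} = e^{hA} y_n + h*phi1(hA) g(y_n), with phi1(z) = (e^z - 1)/z. The sketch below is a fixed-step illustration under that assumed semilinear form, not the paper's variable-stepsize implementation.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(Z):
    # phi_1(Z) = Z^{-1} (e^Z - I); adequate for this small dense demo
    # provided Z is nonsingular.
    return solve(Z, expm(Z) - np.eye(Z.shape[0]))

def exponential_euler(A, g, y0, t_end, h):
    """Fixed-step exponential Euler for y' = A y + g(y) (sketch only)."""
    E = expm(h * A)        # exact propagator for the stiff linear part
    P = h * phi1(h * A)    # weight applied to the nonlinear term
    y, t = y0.copy(), 0.0
    while t < t_end - 1e-12:
        y = E @ y + P @ g(y)
        t += h
    return y
```

A variable-stepsize driver, as in the paper, would pair each step with the second-order variant's embedded estimate to control the local error and adapt h.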
Abstract:
Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many-Integrated-Core (MIC) accelerators.
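For reference, the coupled Heston SDEs are dS_t = mu*S_t dt + sqrt(v_t)*S_t dW^S_t and dv_t = kappa*(theta - v_t) dt + xi*sqrt(v_t) dW^v_t with corr(dW^S, dW^v) = rho. The sketch below is a plain NumPy Euler discretisation of a batch of independent paths, illustrating the per-particle independence the abstract exploits; it is not the authors' OpenCL kernel, and the full-truncation handling of negative variance is one common choice among several.

```python
import numpy as np

def heston_paths(s0, v0, mu, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
    """Simulate independent Heston paths with full-truncation Euler.
    Each path is independent -- the parallelism a GPU/FPGA kernel exploits."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        # Correlate the two Brownian increments via rho.
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)  # full truncation: clamp negative variance
        s = s * np.exp((mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return s, v
```

In a particle-filtering setting, each such path would carry a weight updated by conditioning on observed prices, but the simulation kernel above is the embarrassingly parallel core.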
Abstract:
Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed, along with an algorithmic implementation, to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
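To make the data-aware idea concrete, here is a minimal greedy sketch (not the paper's metaheuristic): each comparison task (i, j) is placed on the node that already holds the most of its inputs, breaking ties by speed-normalised load so that heterogeneous nodes finish at roughly the same time. The node names, speed weights and greedy rule are all hypothetical.

```python
from itertools import combinations

def data_aware_schedule(n_items, node_speeds):
    """Greedy co-scheduling of data and all-to-all comparison tasks.

    Toy sketch only: every task (i, j) is placed where its inputs
    already live when possible, so no data is redistributed at runtime.
    """
    tasks = {k: [] for k in node_speeds}    # node -> assigned tasks
    data = {k: set() for k in node_speeds}  # node -> items it stores
    for i, j in combinations(range(n_items), 2):
        # Fewest missing inputs first; then least estimated finish time,
        # normalised by node speed so faster nodes absorb more tasks.
        def cost(k):
            missing = len({i, j} - data[k])
            return (missing, (len(tasks[k]) + 1) / node_speeds[k])
        k = min(node_speeds, key=cost)
        tasks[k].append((i, j))
        data[k] |= {i, j}  # co-locate both inputs with the task
    return tasks, data

# Example: 6 data items across three nodes with relative speeds 1x, 2x, 4x.
tasks, data = data_aware_schedule(6, {"n0": 1.0, "n1": 2.0, "n2": 4.0})
```

The paper's constrained-optimization formulation would replace this greedy rule with metaheuristic pre-scheduling, but the invariant is the same: every task finds both of its inputs locally.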
Abstract:
Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in both men and women. Research into the causes, prevention and treatment of lung cancer is ongoing, and much progress has been made recently in these areas; however, survival rates have not significantly improved. It is therefore essential to develop biomarkers for early diagnosis of lung cancer, prediction of metastasis and evaluation of treatment efficiency, as well as to use these molecules to provide some understanding of tumour biology and to translate highly promising findings in basic science research into clinical application. In this investigation, two-dimensional difference gel electrophoresis and mass spectrometry were initially used to analyse conditioned media from a panel of lung cancer and normal bronchial epithelial cell lines. Significant proteins were identified, with heterogeneous nuclear ribonucleoprotein A2B1 (hnRNPA2B1), pyruvate kinase M2 isoform (PKM2), Hsc-70 interacting protein and lactate dehydrogenase A (LDHA) selected for analysis in serum from healthy individuals and lung cancer patients. hnRNPA2B1, PKM2 and LDHA were found to be statistically significant in all comparisons. Tissue analysis and knockdown of hnRNPA2B1 using siRNA subsequently demonstrated both the overexpression of this molecule and its potential role in lung tumorigenesis. The data presented highlight a number of in vitro derived candidate biomarkers subsequently verified in patient samples and also provide some insight into their roles in the complex intracellular mechanisms associated with tumour progression.