241 results for Heterogeneous


Relevance:

20.00%

Publisher:

Abstract:

Successful prediction of groundwater flow and solute transport through highly heterogeneous aquifers has remained elusive due to the limitations of methods to characterize hydraulic conductivity (K) and generate realistic stochastic fields from such data. As a result, many studies have suggested that the classical advective-dispersive equation (ADE) cannot reproduce such transport behavior. Here we demonstrate that when high-resolution K data are used with a fractal stochastic method that produces K fields with adequate connectivity, the classical ADE can accurately predict solute transport at the macrodispersion experiment (MADE) site in Mississippi. This development provides great promise to accurately predict contaminant plume migration, design more effective remediation schemes, and reduce environmental risks. Key points: (1) non-Gaussian transport behavior at the MADE site is unraveled; (2) the ADE can reproduce tracer transport in heterogeneous aquifers with no calibration; (3) a new fractal method generates heterogeneous K fields with adequate connectivity.
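
For reference, a standard textbook form of the classical advective-dispersive equation (ADE) discussed above is

    \frac{\partial C}{\partial t} + \nabla \cdot (\mathbf{v}\, C) = \nabla \cdot (\mathbf{D}\, \nabla C),

where C is the solute concentration, v the pore-water velocity obtained from the simulated K field and D the dispersion tensor; the exact formulation applied at the MADE site may differ in detail.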

Relevance:

20.00%

Publisher:

Abstract:

The autonomous capabilities in collaborative unmanned aircraft systems are growing rapidly. Without appropriate transparency, the effectiveness of the future multiple Unmanned Aerial Vehicle (UAV) management paradigm will be significantly limited by the human agent's cognitive abilities, where the operator's Cognitive Workload (CW) and Situation Awareness (SA) become disproportionate. This poses the challenge of evaluating the impact of feedback on the robot's autonomous capability, which gives the human agent, in a supervisory role, greater transparency into the robot's autonomous status. This paper presents the motivation, aim, related works, experiment theory, methodology, results and discussion, and the future work succeeding this preliminary study. The results illustrate that, with greater transparency of a UAV's autonomous capability, an overall improvement in the subjects' cognitive abilities was evident: at the 95% confidence level, the test subjects' mean CW showed a statistically significant reduction, while their mean SA showed a statistically significant increase.
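
The comparison of means reported above (a 95% confidence test of workload with and without capability transparency) can be illustrated with a minimal sketch; the scores, the within-subjects design and the use of a paired t-test are assumptions made purely for illustration, not details given in the abstract.

    # Illustrative only: hypothetical workload scores for eight subjects,
    # recorded without and then with autonomy-capability transparency.
    from scipy import stats

    cw_baseline = [62, 71, 58, 66, 75, 69, 60, 64]
    cw_transparent = [55, 63, 52, 60, 70, 61, 53, 58]

    # Paired (within-subjects) t-test at the 5% significance level.
    t_stat, p_value = stats.ttest_rel(cw_baseline, cw_transparent)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Mean cognitive workload is significantly lower with transparency.")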

Relevance:

20.00%

Publisher:

Abstract:

This work addresses fundamental issues in the mathematical modelling of the diffusive motion of particles in biological and physiological settings. New mathematical results are proved and implemented in computer models for the colonisation of the embryonic gut by neural cells and the propagation of electrical waves in the heart, offering new insights into the relationships between structure and function. In particular, the thesis focuses on the use of non-local differential operators of non-integer order to capture the main features of diffusion processes occurring in complex spatial structures characterised by high levels of heterogeneity.
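
One common way such non-integer-order, non-local operators enter a diffusion model, shown here only as a generic illustration rather than the specific formulation adopted in the thesis, is the space-fractional diffusion equation

    \frac{\partial u}{\partial t} = -D_{\alpha}\,(-\Delta)^{\alpha/2} u, \qquad 1 < \alpha \le 2,

where (-\Delta)^{\alpha/2} is the fractional Laplacian; the classical diffusion equation is recovered at \alpha = 2, while smaller \alpha produces the heavier-tailed spreading associated with highly heterogeneous media.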

Relevance:

20.00%

Publisher:

Abstract:

Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
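
To make the idea of a constructive placement algorithm concrete, the following is a minimal sketch under heavily simplified, entirely hypothetical assumptions (fixed VM hourly prices, additive CPU/memory demands, a cheapest-feasible greedy rule and reuse of spare capacity); it is not the algorithm or cost model proposed in the paper.

    # Hypothetical illustration of constructive MapReduce placement:
    # assign each job to spare capacity on already-rented VMs first,
    # otherwise rent the cheapest VM type that meets its demand.

    vm_types = [  # (name, cpu_cores, mem_gb, hourly_cost) -- made-up figures
        ("small", 2, 4, 0.10),
        ("medium", 4, 8, 0.20),
        ("large", 8, 16, 0.38),
    ]
    jobs = [  # (job_id, cpu_demand, mem_demand) -- made-up figures
        ("j1", 3, 6), ("j2", 1, 2), ("j3", 6, 12),
    ]

    placement = {}
    spare = []  # per rented VM: [cpu_left, mem_left, vm_type_name]

    for job_id, cpu, mem in jobs:
        # 1) Try spare capacity on already-rented VMs (no extra cost).
        for s in spare:
            if s[0] >= cpu and s[1] >= mem:
                s[0] -= cpu; s[1] -= mem
                placement[job_id] = s[2]
                break
        else:
            # 2) Otherwise rent the cheapest VM type that fits the job.
            name, vcpu, vmem, cost = min(
                (t for t in vm_types if t[1] >= cpu and t[2] >= mem),
                key=lambda t: t[3],
            )
            spare.append([vcpu - cpu, vmem - mem, name])
            placement[job_id] = name

    print(placement)  # e.g. {'j1': 'medium', 'j2': 'medium', 'j3': 'large'}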

Relevance:

20.00%

Publisher:

Abstract:

In recent years, rapid advances in information technology have led to various data collection systems which are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell-phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite the fact that there have been substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the increased complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, whether signalized or not, at which the switching of the traffic lights and the turning maneuvers of the road users lead to shock-wave phenomena that propagate upstream of the intersections. This paper develops a new model-based methodology to build up a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly from loop detectors and partial observations from Bluetooth and GPS devices.
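
The shock-wave behaviour described above is the kind of feature captured by first-order kinematic wave (LWR-type) traffic models; purely for orientation, and not as the specific model developed in this paper, such a model expresses conservation of vehicles on a link as

    \frac{\partial \rho(x,t)}{\partial t} + \frac{\partial q(\rho(x,t))}{\partial x} = 0,

where \rho is the traffic density and q(\rho) the flow given by a fundamental diagram, with the switching traffic signal entering as a time-varying boundary condition at the downstream intersection.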

Relevance:

20.00%

Publisher:

Abstract:

Invasive non-native plants have negatively impacted on biodiversity and ecosystem functions world-wide. Because of the large number of species, their wide distributions and varying degrees of impact, we need a more effective method for prioritizing control strategies for cost-effective investment across heterogeneous landscapes. Here, we develop a prioritization framework that synthesizes scientific data, elicits knowledge from experts and stakeholders to identify control strategies, and appraises the cost-effectiveness of strategies. Our objective was to identify the most cost-effective strategies for reducing the total area dominated by high-impact non-native plants in the Lake Eyre Basin (LEB). We use a case study of the ~120 million ha Lake Eyre Basin that comprises some of the most distinctive Australian landscapes, including Uluru-Kata Tjuta National Park. More than 240 non-native plant species are recorded in the Lake Eyre Basin, with many predicted to spread, but there are insufficient resources to control all species. Lake Eyre Basin experts identified 12 strategies to control, contain or eradicate non-native species over the next 50 years. The total cost of the proposed Lake Eyre Basin strategies was estimated at AU$1.7 billion, an average of AU$34 million annually. Implementation of these strategies is estimated to reduce non-native plant dominance by 17 million ha – there would be a 32% reduction in the likely area dominated by non-native plants within 50 years if these strategies were implemented. The three most cost-effective strategies were controlling Parkinsonia aculeata, Ziziphus mauritiana and Prosopis spp. These three strategies combined were estimated to cost only 0.01% of the total cost of all the strategies, but would provide 20% of the total benefits. Over 50 years, cost-effective spending of AU$2.3 million could eradicate all non-native plant species from the only threatened ecological community within the Lake Eyre Basin, the Great Artesian Basin discharge springs. Synthesis and applications. Our framework, based on a case study of the ~120 million ha Lake Eyre Basin in Australia, provides a rationale for financially efficient investment in non-native plant management and reveals combinations of strategies that are optimal for different budgets. It also highlights knowledge gaps and incidental findings that could improve effective management of non-native plants, for example addressing the reliability of species distribution data and prevalence of information sharing across states and regions.
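
As a rough check derived only from the rounded figures quoted above, the three most cost-effective strategies correspond to approximately

    0.0001 \times \text{AU\$}1.7\times 10^{9} \approx \text{AU\$}170{,}000 \ \text{over 50 years},

while providing about 20% of the 17 million ha benefit, i.e. on the order of 3.4 million ha of avoided non-native plant dominance.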

Relevance:

20.00%

Publisher:

Abstract:

The requirement for distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, affecting the overall computing performance significantly. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments on bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines has been achieved using the approach presented in this paper.
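
For orientation, the all-to-all comparison pattern itself and a naive, capacity-weighted way of spreading its tasks over heterogeneous nodes can be sketched as below; the node speeds and the weighting rule are illustrative assumptions only and do not reproduce the locality-aware distribution strategy proposed in the paper.

    from itertools import combinations

    # Hypothetical data items and relative node speeds (made-up figures).
    items = [f"seq{i}" for i in range(6)]
    node_speed = {"nodeA": 4, "nodeB": 2, "nodeC": 1}

    # All-to-all comparison: one task per unordered pair -> n*(n-1)/2 tasks.
    tasks = list(combinations(items, 2))          # 15 tasks for 6 items

    # Naive capacity-weighted allocation: faster nodes receive
    # proportionally more comparison tasks (ignores data locality).
    total_speed = sum(node_speed.values())
    allocation, start = {}, 0
    for node, speed in node_speed.items():
        share = round(len(tasks) * speed / total_speed)
        allocation[node] = tasks[start:start + share]
        start += share
    allocation[node].extend(tasks[start:])        # any remainder to the last node

    for node, assigned in allocation.items():
        print(node, len(assigned), "tasks")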

Relevance:

20.00%

Publisher:

Abstract:

Interest in collaborative Unmanned Aerial Vehicles (UAVs) within a Multi-Agent System is growing, with the aim of complementing the strengths and weaknesses of the human-machine relationship. To achieve effective management of multiple heterogeneous UAVs, the agents must communicate their status models to each other. This paper presents the effects on operator Cognitive Workload (CW), Situation Awareness (SA), trust and performance of increasing the autonomy capability transparency through text-based communication from the UAVs to the human agents. The results revealed a reduction in CW, an increase in SA, an increase in the Competence, Predictability and Reliability dimensions of trust, and an improvement in operator performance.

Relevance:

20.00%

Publisher:

Abstract:

Tridiagonal diagonally dominant linear systems arise in many scientific and engineering applications. The standard Thomas algorithm for solving such systems is inherently serial, forming a bottleneck in computation. Algorithms such as cyclic reduction and SPIKE reduce a single large tridiagonal system into multiple small independent systems which can be solved in parallel. We have developed portable OpenCL implementations of the cyclic reduction and SPIKE algorithms with the intent to target a range of co-processors in a heterogeneous computing environment, including Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs) and other multi-core processors. In this paper, we evaluate these designs in the context of solver performance, resource efficiency and numerical accuracy.
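
As a baseline for comparison, the serial Thomas algorithm mentioned above can be written in a few lines of Python; this is the standard textbook form for a diagonally dominant tridiagonal system, not the OpenCL cyclic reduction or SPIKE kernels evaluated in the paper.

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c and right-hand side d (lists of length n;
        a[0] and c[-1] are unused). Assumes diagonal dominance."""
        n = len(b)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        # Forward sweep: eliminate the sub-diagonal.
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        # Back substitution.
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: a 4x4 diagonally dominant system.
    print(thomas([0, -1, -1, -1], [4, 4, 4, 4], [-1, -1, -1, 0], [5, 5, 5, 5]))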

Relevance:

20.00%

Publisher:

Abstract:

- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- The new EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 for moderate to high tolerances.
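
For context, a textbook statement of the exponential Euler step for y' = f(y), linearised about the current state with Jacobian J_n, is

    y_{n+1} = y_n + h\,\varphi_1(h J_n)\,f(y_n), \qquad \varphi_1(z) = \frac{e^{z} - 1}{z},

so a variable-stepsize implementation chiefly has to control the step size h and evaluate the action of \varphi_1(h J_n) efficiently; the precise second-order variant and error estimator introduced in this work may differ from this generic form.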

Relevance:

20.00%

Publisher:

Abstract:

Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many Integrated Core (MIC) accelerators.
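
For reference, the Heston dynamics referred to above are conventionally written as the coupled SDEs

    dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S}, \qquad
    dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{v}, \qquad
    dW_t^{S}\,dW_t^{v} = \rho\,dt,

where S_t is the asset price, v_t the instantaneous variance, \kappa the mean-reversion rate, \theta the long-run variance, \xi the volatility of volatility and \rho the correlation between the two Brownian motions (standard textbook notation; the paper's own parameterisation may differ).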

Relevance:

20.00%

Publisher:

Abstract:

Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed along with an algorithmic implementation to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
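
In outline, formulating the two phases together means choosing a data placement x and a task assignment y jointly; a generic, purely illustrative formulation (not the paper's exact model) is

    \min_{x,\,y}\ \max_{k}\ \frac{1}{s_k} \sum_{(i,j)} c_{ij}\, y_{(i,j),k}
    \text{subject to}\quad y_{(i,j),k} \le x_{i,k}\, x_{j,k}, \qquad \sum_{i} d_i\, x_{i,k} \le D_k, \qquad \sum_{k} y_{(i,j),k} = 1,

where x_{i,k} = 1 if data item i is stored on node k, y_{(i,j),k} = 1 if the comparison of items i and j runs on node k, c_{ij} is the comparison cost, s_k and D_k are the speed and storage capacity of node k, and d_i is the size of item i; the first constraint is precisely the data-locality requirement that a task runs only where both of its inputs reside.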

Relevance:

20.00%

Publisher:

Abstract:

Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in both men and women. Research into the causes, prevention and treatment of lung cancer is ongoing and much progress has been made recently in these areas; however, survival rates have not improved significantly. Therefore, it is essential to develop biomarkers for early diagnosis of lung cancer, prediction of metastasis and evaluation of treatment efficiency, as well as to use these molecules to provide some understanding of tumour biology and to translate highly promising findings from basic science research into clinical application. In this investigation, two-dimensional difference gel electrophoresis and mass spectrometry were initially used to analyse conditioned media from a panel of lung cancer and normal bronchial epithelial cell lines. Significant proteins were identified, with heterogeneous nuclear ribonucleoprotein A2B1 (hnRNPA2B1), pyruvate kinase M2 isoform (PKM2), Hsc-70 interacting protein and lactate dehydrogenase A (LDHA) selected for analysis in serum from healthy individuals and lung cancer patients. hnRNPA2B1, PKM2 and LDHA were found to be statistically significant in all comparisons. Tissue analysis and knockdown of hnRNPA2B1 using siRNA subsequently demonstrated both the overexpression and the potential role of this molecule in lung tumorigenesis. The data presented highlight a number of in vitro-derived candidate biomarkers subsequently verified in patient samples and also provide some insight into their roles in the complex intracellular mechanisms associated with tumour progression.

Relevance:

10.00%

Publisher:

Abstract:

The following paper considers the question of where office property is heading. In doing so, it focuses, in the first instance, on identifying and describing a selection of key forces for change present within the contemporary operating environment in which office property functions. Given the increasingly complex, dynamic and multi-faceted character of this environment, the paper seeks to identify only the primary forces for change, within the context of the future of office property. These core drivers of change have, for the purposes of this discussion, been characterised as including a range of economic, demographic and socio-cultural factors, together with developments in information and communication technology. Having established this foundation, the paper proceeds to consider the manner in which these forces may, in the future, be manifested within the office property market. Comment is offered regarding the potential future implications of these forces for change, together with their likely influence on the nature and management of the physical asset itself. Whilst no explicit time horizon has been envisaged in the preparation of this paper, particular attention has been accorded to short- to medium-term trends, that is, those likely to emerge in the office property marketplace over the coming two decades. Further, the paper considers the question posed, in respect of the future of office property, in the context of developed western nations. The degree of commonality seen in these mature markets is such that generalisations may more appropriately and robustly be applied. Whilst some of the comments offered with respect to the target market may find application in other arenas, it is beyond the scope of this paper to explicitly consider highly heterogeneous markets. Given also the wide scope of this paper, key drivers for change and their likely implications for the commercial office property market are identified at a global level (within the parameters established above). Accordingly, the focus is necessarily such that it serves to reflect overarching directions at a universal level, with the effect being that direct applicability to individual markets, when viewed in isolation on a geographic or property-type-specific basis, may not be fitting in all instances.