96 results for "Heterogeneous nanostructures"
Abstract:
The autonomous capabilities of collaborative unmanned aircraft systems are growing rapidly. Without appropriate transparency, the effectiveness of the future multiple Unmanned Aerial Vehicle (UAV) management paradigm will be significantly limited by the human agent's cognitive abilities, with the operator's Cognitive Workload (CW) and Situation Awareness (SA) becoming disproportionate. This poses the challenge of evaluating the impact of robot autonomous-capability feedback, which gives the human agent, acting in a supervisory role, greater transparency into the robot's autonomous status. This paper presents the motivation, aim, related work, experiment theory, methodology, results and discussion, and the future work succeeding this preliminary study. The results illustrate that, with greater transparency of a UAV's autonomous capability, the subjects' cognitive abilities improved overall: at the 95% confidence level, the test subjects' mean CW showed a statistically significant reduction, while their mean SA showed a significant increase.
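The abstract does not state which statistical test was used; purely as an illustration of testing a mean reduction at the 95% confidence level, a paired t-test on per-subject workload scores could be run as follows (all names and values below are hypothetical, not data from the study):

```python
# Illustrative only: paired t-test at the 95% confidence level, assuming each
# subject reported a Cognitive Workload (CW) score both without and with
# transparency feedback. The scores are hypothetical placeholders.
from scipy import stats

cw_baseline = [62, 71, 58, 66, 74, 69, 63, 70]      # hypothetical CW, no feedback
cw_transparent = [55, 64, 51, 60, 68, 61, 57, 63]   # hypothetical CW, with feedback

t_stat, p_value = stats.ttest_rel(cw_baseline, cw_transparent)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # i.e. significant at 95% confidence
    print("Mean CW reduction is statistically significant")
```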
Abstract:
The collisions between colloidal metal nanoparticles and a carbon electrode were explored as a dynamic method for the electrodeposition of a diverse range of electrocatalytically active Ag and Au nanostructures, whose morphology is dominated by the electrostatic interaction between the charge of the nanoparticle and the metal salt.
Abstract:
This work addresses fundamental issues in the mathematical modelling of the diffusive motion of particles in biological and physiological settings. New mathematical results are proved and implemented in computer models for the colonisation of the embryonic gut by neural cells and the propagation of electrical waves in the heart, offering new insights into the relationships between structure and function. In particular, the thesis focuses on the use of non-local differential operators of non-integer order to capture the main features of diffusion processes occurring in complex spatial structures characterised by high levels of heterogeneity.
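As an illustration of the non-integer-order operators referred to above, a space-fractional diffusion equation replaces the classical Laplacian with a fractional power; the specific operators and exponents used in the thesis are not given in the abstract, so the following is a generic example:
\[
\frac{\partial u}{\partial t} = -D\,(-\Delta)^{\alpha/2}\,u, \qquad 1 < \alpha \le 2,
\]
where $(-\Delta)^{\alpha/2}$ is the fractional Laplacian, $D$ is a diffusion coefficient, and $\alpha = 2$ recovers standard Fickian diffusion; values $\alpha < 2$ produce the heavy-tailed, non-local transport associated with highly heterogeneous media.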
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is achieved through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using a heterogeneous MapReduce placement approach that does not consider the spare resources from existing MapReduce computations. The experimental results also demonstrate the good scalability of this new approach.
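The abstract does not describe the constructive algorithm itself; the sketch below is only a generic greedy illustration of constructive placement under capacity constraints (hypothetical VM types, prices, and task demands), included to show the flavour of the problem rather than the paper's method:

```python
# Illustrative only: a generic greedy constructive heuristic that places unit
# MapReduce tasks onto heterogeneous VMs, reusing spare capacity before opening
# new (cheapest-per-slot) VMs. Not the algorithm proposed in the paper.
from dataclasses import dataclass

@dataclass
class VM:
    vm_type: str
    capacity: int        # task slots
    hourly_cost: float
    used: int = 0

vm_catalogue = [("small", 4, 0.10), ("medium", 8, 0.18), ("large", 16, 0.32)]
tasks = [1] * 37         # 37 hypothetical unit-demand tasks

open_vms = []
for demand in tasks:
    candidates = [vm for vm in open_vms if vm.capacity - vm.used >= demand]
    if candidates:
        vm = min(candidates, key=lambda v: v.hourly_cost)   # reuse spare capacity
    else:
        name, cap, cost = min(vm_catalogue, key=lambda x: x[2] / x[1])
        vm = VM(name, cap, cost)                            # open cheapest-per-slot VM
        open_vms.append(vm)
    vm.used += demand

print(f"opened {len(open_vms)} VMs, hourly cost {sum(v.hourly_cost for v in open_vms):.2f}")
```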
Abstract:
In recent years, rapid advances in information technology have led to various data collection systems which are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell-phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite the fact that there have been substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the enhanced complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, un-signalized or signalized, at which the switching of the traffic lights and the turning maneuvers of the road users lead to shock-wave phenomena that propagate upstream of the intersections. This paper develops a new model-based methodology to build up a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly from loop detectors and partial observations from Bluetooth and GPS devices.
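For context on the shock-wave phenomena mentioned above: in first-order kinematic-wave (LWR) traffic theory, which is commonly used to describe signal-induced shocks (the abstract does not say which traffic model the methodology adopts), a shock separating an upstream state $(k_1, q_1)$ from a downstream state $(k_2, q_2)$ travels at
\[
w = \frac{q_2 - q_1}{k_2 - k_1},
\]
where $q$ is flow and $k$ is density; a negative $w$ corresponds to a queue front propagating upstream of the stop line when the signal turns red.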
Abstract:
Invasive non-native plants have negatively impacted on biodiversity and ecosystem functions world-wide. Because of the large number of species, their wide distributions and varying degrees of impact, we need a more effective method for prioritizing control strategies for cost-effective investment across heterogeneous landscapes. Here, we develop a prioritization framework that synthesizes scientific data, elicits knowledge from experts and stakeholders to identify control strategies, and appraises the cost-effectiveness of strategies. Our objective was to identify the most cost-effective strategies for reducing the total area dominated by high-impact non-native plants in the Lake Eyre Basin (LEB). We use a case study of the ~120 million ha Lake Eyre Basin, which comprises some of the most distinctive Australian landscapes, including Uluru-Kata Tjuta National Park. More than 240 non-native plant species are recorded in the Lake Eyre Basin, with many predicted to spread, but there are insufficient resources to control all species. Lake Eyre Basin experts identified 12 strategies to control, contain or eradicate non-native species over the next 50 years. The total cost of the proposed Lake Eyre Basin strategies was estimated at AU$1.7 billion, an average of AU$34 million annually. Implementation of these strategies is estimated to reduce non-native plant dominance by 17 million ha: there would be a 32% reduction in the likely area dominated by non-native plants within 50 years if these strategies were implemented. The three most cost-effective strategies were controlling Parkinsonia aculeata, Ziziphus mauritiana and Prosopis spp. These three strategies combined were estimated to cost only 0.01% of the total cost of all the strategies, but would provide 20% of the total benefits. Over 50 years, cost-effective spending of AU$2.3 million could eradicate all non-native plant species from the only threatened ecological community within the Lake Eyre Basin, the Great Artesian Basin discharge springs. Synthesis and applications. Our framework, based on a case study of the ~120 million ha Lake Eyre Basin in Australia, provides a rationale for financially efficient investment in non-native plant management and reveals combinations of strategies that are optimal for different budgets. It also highlights knowledge gaps and incidental findings that could improve effective management of non-native plants, for example addressing the reliability of species distribution data and the prevalence of information sharing across states and regions.
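The abstract does not give the appraisal formula; a common form of cost-effectiveness used in such prioritization frameworks, shown here only as an assumed illustration, ranks each strategy $i$ by
\[
CE_i = \frac{B_i \times F_i}{C_i},
\]
where $B_i$ is the expected reduction in area dominated by non-native plants if the strategy succeeds, $F_i$ is its feasibility (probability of success), and $C_i$ is its total cost over the planning horizon.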
Abstract:
The requirement of distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, affecting the overall computing performance significantly. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments with bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines has been achieved using the approach presented in this paper.
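To make the ATAC pattern concrete: every unordered pair of data items yields one comparison task, so a naive distribution can force nodes to fetch most of the dataset at runtime. The sketch below (hypothetical item and node names, not the paper's distribution strategy) simply assigns each pairwise task to the node that already holds more of the required data and counts the resulting transfers:

```python
# Illustrative only: all-to-all comparison (ATAC) task generation and a naive
# locality-aware assignment. Hypothetical data items and nodes; this is not the
# distribution strategy proposed in the paper.
from itertools import combinations

items = [f"seq{i}" for i in range(6)]               # hypothetical data items
nodes = {"node0": {"seq0", "seq1", "seq2"},          # hypothetical initial placement
         "node1": {"seq3", "seq4", "seq5"}}

tasks = list(combinations(items, 2))                 # C(6, 2) = 15 comparison tasks

assignment, transfers = {}, 0
for a, b in tasks:
    best = max(nodes, key=lambda n: len(nodes[n] & {a, b}))  # best data locality
    transfers += len({a, b} - nodes[best])           # items this node must fetch
    nodes[best] |= {a, b}
    assignment[(a, b)] = best

print(f"{len(tasks)} tasks assigned, {transfers} data-item transfers at runtime")
```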
Abstract:
Carbon nanostructures (CNs) are amongst the most promising biorecognition nanomaterials due to their unprecedented optical, electrical and structural properties. As such, CNs may be harnessed to tackle the detrimental public health and socio-economic adversities associated with neurodegenerative diseases (NDs). In particular, CNs may be tailored for the specific determination of biomarkers indicative of NDs. However, realizing such a biosensor represents a significant technological challenge: CNs of consistently high quality must be fabricated uniformly in order to facilitate highly sensitive detection of biomarkers suspended in complex biological environments. Notably, the versatility of plasma-based techniques for the synthesis and surface modification of CNs may be embraced to optimize biorecognition performance and capabilities. This review surveys recent advances in CN-based biosensors and highlights the benefits of plasma-processing techniques to enable, enhance, and tailor the performance of CNs and to optimize their fabrication, towards biosensors with unparalleled performance for the early diagnosis of NDs via energy-efficient, environmentally benign, and inexpensive approaches.
Abstract:
Interest in the area of collaborative Unmanned Aerial Vehicles (UAVs) in a Multi-Agent System is growing to complement the strengths and weaknesses of the human-machine relationship. To achieve effective management of multiple heterogeneous UAVs, the agents must communicate their status models to one another. This paper presents the effects on operator Cognitive Workload (CW), Situation Awareness (SA), trust and performance of increasing autonomy-capability transparency through text-based communication from the UAVs to the human agents. The results revealed a reduction in CW, an increase in SA, increases in the Competence, Predictability and Reliability dimensions of trust, and improved operator performance.
Abstract:
Bone diseases such as rickets and osteoporosis cause significant reduction in bone quantity and quality, which leads to mechanical abnormalities. However, the precise ultrastructural mechanism by which altered bone quality affects mechanical properties is not clearly understood. Here we demonstrate the functional link between altered bone quality (reduced mineralization) and abnormal fibrillar-level mechanics using a novel, real-time synchrotron X-ray nanomechanical imaging method to study a mouse model with rickets due to reduced extrafibrillar mineralization. A previously unreported N-ethyl-N-nitrosourea (ENU) mouse model for hypophosphatemic rickets (Hpr), resulting from a missense Trp314Arg mutation of the phosphate-regulating gene with homologies to endopeptidase on the X chromosome (Phex) and with features consistent with X-linked hypophosphatemic rickets (XLHR) in man, was investigated using in situ synchrotron small-angle X-ray scattering to measure real-time changes in the axial periodicity of the nanoscale mineralized fibrils in bone during tensile loading. These measurements determine nanomechanical parameters including the fibril elastic modulus and maximum fibril strain. Mineral content was estimated using backscattered electron imaging. A significant reduction of the effective fibril modulus and an enhancement of the maximum fibril strain were found in Hpr mice. The effective fibril modulus and maximum fibril strain in the elastic region increased consistently with age in Hpr and wild-type mice. However, the mean mineral content was ∼21% lower in Hpr mice and was more heterogeneous in its distribution. Our results are consistent with a nanostructural mechanism in which incompletely mineralized fibrils show greater extensibility and lower stiffness, leading to macroscopic outcomes such as greater bone flexibility. Our study demonstrates the value of in situ X-ray nanomechanical imaging in linking alterations in bone nanostructure to nanoscale mechanical deterioration in a metabolic bone disease.
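In this kind of SAXS-based nanomechanical imaging, the fibril-level quantities are typically derived from the shift of the axial D-period (about 67 nm) of the mineralized collagen fibrils; the abstract does not spell the relation out, but the standard definitions are
\[
\varepsilon_{\text{fibril}} = \frac{D - D_0}{D_0},
\]
with $D_0$ the D-period at zero load, the effective fibril modulus taken as the slope of applied tissue stress versus $\varepsilon_{\text{fibril}}$ in the elastic region, and the maximum fibril strain being the largest $\varepsilon_{\text{fibril}}$ reached before failure.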
Abstract:
The authors combine nanostenciling and pulsed laser deposition to pattern germanium (Ge) nanostructures into desired architectures. They have analyzed the evolution of the Ge morphology with coverage. Following the formation of a wetting layer within each area defined by the stencil's apertures, Ge growth becomes three-dimensional and the size and number of Ge nanocrystals evolve with coverage. Micro-Raman spectroscopy shows that the deposits are crystalline and epitaxial. This approach is promising for the parallel patterning of semiconductor nanostructures for optoelectronic applications.
Abstract:
Tridiagonal diagonally dominant linear systems arise in many scientific and engineering applications. The standard Thomas algorithm for solving such systems is inherently serial, forming a computational bottleneck. Algorithms such as cyclic reduction and SPIKE reduce a single large tridiagonal system into multiple small independent systems which can be solved in parallel. We have developed portable OpenCL implementations of the cyclic reduction and SPIKE algorithms with the intent to target a range of co-processors in a heterogeneous computing environment, including Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs) and other multi-core processors. In this paper, we evaluate these designs in terms of solver performance, resource efficiency and numerical accuracy.
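For reference, the serial Thomas algorithm mentioned above solves a tridiagonal system in O(n) operations by forward elimination followed by back substitution; a minimal NumPy sketch of the generic textbook scheme (not the authors' code) is:

```python
# Minimal sketch of the Thomas algorithm for a tridiagonal system A x = d, with
# a, b, c the sub-, main- and super-diagonals (a[0] and c[-1] unused). Assumes
# diagonal dominance, as in the paper's setting, so no pivoting is needed.
import numpy as np

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                          # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                 # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve on a small diagonally dominant system
n = 6
a, b, c = np.full(n, -1.0), np.full(n, 4.0), np.full(n, -1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```

Cyclic reduction and SPIKE break exactly this serial dependency chain by partitioning the system into independent sub-systems that can be solved concurrently.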
Abstract:
Inspired by high porosity, absorbency, wettability and hierarchical ordering on the micrometer and nanometer scale of cotton fabrics, a facile strategy is developed to coat visible light active metal nanostructures of copper and silver on cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites to participate in heterogeneous catalysis with high efficiency. The ability of metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation conditions. Understanding the mode of electron transfer during visible light illumination in Ag@Cotton and Cu@Cotton through electrochemical measurements provides mechanistic evidence on the influence of light in promoting electron transfer during heterogeneous catalysis for the first time. The outcomes presented in this work will be helpful in designing new multifunctional fabrics with the ability to absorb visible light and thereby enhance light-activated catalytic processes.
Abstract:
- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- The new EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 for moderate to high tolerances.
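As an illustration of the basic first-order scheme underlying the EEM (the paper's variable-stepsize machinery and second-order error-estimating variant are not reproduced here), one exponential Rosenbrock-Euler step for y' = f(y) advances the solution by y_{n+1} = y_n + h*phi1(h*J_n)*f(y_n), where J_n is the Jacobian at y_n and phi1(z) = (e^z - 1)/z. A minimal dense-matrix sketch on a hypothetical stiff test problem:

```python
# Minimal sketch of one exponential Euler step y_{n+1} = y_n + h*phi1(h*J)*f(y_n),
# evaluating phi1 of a dense matrix via the block identity
# expm([[M, I], [0, 0]]) = [[expm(M), phi1(M)], [0, I]]. Illustrative only; the
# paper's variable-stepsize implementation and error estimator are not shown.
import numpy as np
from scipy.linalg import expm

def phi1(M):
    n = M.shape[0]
    aug = np.zeros((2 * n, 2 * n))
    aug[:n, :n] = M
    aug[:n, n:] = np.eye(n)
    return expm(aug)[:n, n:]

def exp_euler_step(f, jac, y, h):
    J = jac(y)
    return y + h * phi1(h * J) @ f(y)

# hypothetical stiff linear test problem y' = A y with widely separated eigenvalues
A = np.array([[-100.0, 1.0],
              [   0.0, -1.0]])
f = lambda y: A @ y
jac = lambda y: A

y = np.array([1.0, 1.0])
for _ in range(10):
    y = exp_euler_step(f, jac, y, h=0.1)
print(y)   # for a linear problem each step reproduces expm(h*A) exactly
```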
Abstract:
Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many Integrated Core (MIC) accelerators.
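For context, the Heston model couples the asset price and its variance through the SDEs $dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^S$ and $dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^v$ with $\operatorname{corr}(dW^S, dW^v) = \rho$. The sketch below simulates independent sample paths with a full-truncation Euler-Maruyama scheme (hypothetical parameter values; it is not the paper's OpenCL kernel and omits the particle-filtering/conditioning step), which illustrates why the workload is embarrassingly parallel:

```python
# Illustrative only: full-truncation Euler-Maruyama simulation of Heston paths.
# Parameter values are hypothetical; the particle-filtering step and the OpenCL
# implementation described in the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 10_000, 252, 1.0
dt = T / n_steps
mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.7

S = np.full(n_paths, 100.0)   # initial asset price
v = np.full(n_paths, 0.04)    # initial variance

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)                        # full truncation of the variance
    S *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

print(f"mean terminal price: {S.mean():.2f}")  # each path is independent, hence
                                               # ideal for massively parallel hardware
```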