Abstract:
Convectively driven downburst winds pose a threat to structures and communities in many regions of Australia not subject to tropical cyclones. Extreme value analysis shows that, for return periods of interest to engineering design, these events produce higher gust wind speeds than synoptic-scale windstorms. Despite this, comparatively little is known of the near-ground wind structure of these potentially hazardous windstorms. With this in mind, a series of idealised three-dimensional numerical simulations was undertaken to investigate convective storm wind fields. A dry, non-hydrostatic, sub-cloud model with parameterisation of the microphysics was used. Simulations were run with a uniform 20 m horizontal grid resolution and a variable vertical resolution increasing from 1 m. A systematic grid resolution study showed that further refinement did not alter the morphological structure of the outflow. Simulations were performed for stationary downbursts in a quiescent air field, stationary downbursts embedded within environmental boundary layer winds, and translating downbursts embedded within environmental boundary layer winds.
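A vertical grid of this kind (fine near the surface, coarsening aloft) is commonly generated with a geometric stretching law. The Python sketch below illustrates one plausible construction; the stretching ratio and domain top are illustrative assumptions, not values from the paper.

```python
# Sketch: geometrically stretched vertical grid starting from a 1 m cell
# at the surface. The ratio and domain top are assumed for illustration.
def stretched_levels(dz0=1.0, ratio=1.05, z_top=2000.0):
    """Return vertical cell-interface heights from the surface to z_top."""
    levels = [0.0]
    dz = dz0
    while levels[-1] < z_top:
        levels.append(levels[-1] + dz)
        dz *= ratio  # each cell is `ratio` times thicker than the one below
    return levels

z = stretched_levels()
print(f"{len(z) - 1} cells; first dz = {z[1] - z[0]:.1f} m, "
      f"last dz = {z[-1] - z[-2]:.1f} m")
```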
Abstract:
This contribution focuses on plasma-enhanced chemical vapor deposition systems and the unique features that make them particularly attractive for the nanofabrication of flat panel display microemitter arrays based on ordered patterns of single-crystalline carbon nanotip structures. The fundamentals of the plasma-based nanofabrication of carbon nanotips and of some other important nanofilms and nanostructures are examined. Specific features, challenges, and potential benefits of using plasma-based systems for the relevant nanofabrication processes are analyzed within the framework of the "plasma-building unit" approach, which builds on extensive experimental data on plasma diagnostics and nanofilm/nanostructure characterization, numerical simulation of the species composition in the ionized gas phase (multicomponent fluid models), ion dynamics and interaction with ordered carbon nanotip patterns, and ab initio computations of the chemical structure of single-crystalline carbon nanotips. This generic approach is also applicable to the nanoscale assembly of various carbon nanostructures, semiconductor quantum dot structures, and nanocrystalline bioceramics. Special attention is paid to the most efficient strategies for controlling the main plasma-generated building units, both in the ionized gas phase and on nanostructured deposition surfaces. The issues of tailoring reactive plasma environments and developing versatile plasma nanofabrication facilities are also discussed.
Abstract:
We study the natural problem of secure n-party computation (in the passive, computationally unbounded attack model) of the n-product function f_G(x_1,...,x_n) = x_1·x_2 ⋯ x_n in an arbitrary finite group (G,·), where the input of party P_i is x_i ∈ G for i = 1,...,n. For flexibility, we are interested in protocols for f_G which require only black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our results are as follows. First, on the negative side, we show that if (G,·) is non-abelian and n ≥ 4, then no ⌈n/2⌉-private protocol for computing f_G exists. Second, on the positive side, we initiate an approach for the construction of black-box protocols for f_G based on k-of-k threshold secret sharing schemes, which are efficiently implementable over any black-box group G. We reduce the problem of constructing such protocols to a combinatorial colouring problem in planar graphs. We then give two constructions for such graph colourings. Our first colouring construction gives a protocol with optimal collusion resistance t < n/2, but has exponential communication complexity of O(n·C(2t+1, t)²/t) group elements, where C(·,·) denotes a binomial coefficient (this construction easily extends to general adversary structures). Our second, probabilistic colouring construction gives a protocol with (close to optimal) collusion resistance t < n/μ for a graph-related constant μ ≤ 2.948, and has efficient communication complexity of O(n·t²) group elements. Furthermore, we believe that our results can be improved by further study of the associated combinatorial problems.
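The k-of-k threshold sharing used as a building block is easy to realise with only black-box group operations: a secret x is split into k group elements whose ordered product is x, so that any k−1 shares are uniformly random and reveal nothing. A minimal sketch follows; the concrete group (the multiplicative group of a prime field) is a stand-in chosen for illustration and is not from the paper.

```python
import secrets

# Black-box group interface: operation, inverse, uniform sampling.
# Concrete stand-in: the multiplicative group Z_P^* for a prime P.
P = 2**61 - 1  # a Mersenne prime, so every nonzero residue is invertible

def op(a, b):        return (a * b) % P
def inv(a):          return pow(a, P - 2, P)          # inverse via Fermat
def rand_element():  return secrets.randbelow(P - 1) + 1

def share(x, k):
    """k-of-k sharing: x = s_1 · s_2 ⋯ s_k (ordered product).

    Works in non-abelian groups too, since the last share is defined as
    (s_1 ⋯ s_{k-1})^{-1} · x and reconstruction preserves the order.
    """
    prefix = [rand_element() for _ in range(k - 1)]
    acc = 1
    for s in prefix:
        acc = op(acc, s)
    return prefix + [op(inv(acc), x)]

def reconstruct(shares):
    acc = 1
    for s in shares:
        acc = op(acc, s)
    return acc

x = rand_element()
assert reconstruct(share(x, 5)) == x
```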
Abstract:
Cells are the fundamental building blocks of plant-based food materials, and many of the structural changes that arise during food processing can fundamentally be derived as a function of the deformations of the cellular structure. In food dehydration, the bulk-level changes in porosity, density, and shrinkage can be better explained using cellular-level deformations initiated by moisture removal from the cellular fluid. A novel approach is used in this research to model the cell fluid with Smoothed Particle Hydrodynamics (SPH) and the cell walls with the Discrete Element Method (DEM), techniques known to be robust in treating complex fluid and solid mechanics. High Performance Computing (HPC) is used for the computations due to its computing advantages. Compared with state-of-the-art drying models and their deficiencies, the current model is found to be robust in replicating the drying mechanics of plant-based food materials at the microscale.
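To give a flavour of the SPH side of such a model: the density at each fluid particle is estimated as a kernel-weighted sum over neighbouring particles. The minimal 2-D sketch below uses the standard cubic-spline kernel; the particle arrangement and smoothing length are illustrative assumptions, not values from the paper's cell model.

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 2-D cubic-spline SPH kernel with support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)  # 2-D normalisation constant
    w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h), summed over all particles j."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_W(r, h)).sum(axis=1)

# Illustrative fluid: unit-mass particles on a small regular grid
pos = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
print(sph_density(pos, np.ones(len(pos)), h=1.2))
```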
Abstract:
The close relationship between rain and lightning is well known. However, there are numerous documented observations of heavy rain accompanied by little or no lightning activity (Williams et al., 1992; Jayaratne, 1993). Kuleshov et al. (2002) studied thunderstorm distribution and frequency in Australia and concluded that thunderstorm frequency (as expressed by the number of thunder-days) in Australia does not, in general, appear to vary in any consistent way with rainfall. However, thunder-days describe the occurrence of thunderstorms as heard by an observer, and are therefore only proxy data for evaluating actual lightning activity (i.e. the number of total or cloud-to-ground flashes). Field experiments have demonstrated a strong increase in lightning activity with convective available potential energy (CAPE). It has also been shown that CAPE increases linearly with wet-bulb potential temperature, Tw (Williams et al., 1992). In this study, we examine the relationship between lightning ground flash incidence and two parameters, surface rainfall and maximum surface wet-bulb temperature, for selected localities around Australia...
Abstract:
Building information models are increasingly being utilised for the facility management of large facilities such as critical infrastructures. In such environments, it is valuable to exploit the vast amount of data contained within building information models to improve access control administration. The use of building information models in access control scenarios can provide 3D visualisation of buildings as well as many other advantages, such as the automation of essential tasks including path finding, consistency detection, and accessibility verification. However, there is no mathematical model of building information models that can be used to describe and compute these functions. In this paper, we show how graph theory can be utilised as a representation language for building information models and the proposed security-related functions. This graph-theoretic representation allows building information models to be represented mathematically and computations to be performed using these functions.
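The graph-theoretic view can be made concrete by treating spaces as vertices and doorways as edges; path finding for access control then reduces to standard graph search. A minimal sketch, with a room topology invented purely for illustration (not the paper's formalism):

```python
from collections import deque

# Spaces as vertices, doorways as edges (illustrative topology).
building = {
    "lobby":       ["corridor"],
    "corridor":    ["lobby", "office", "server_room"],
    "office":      ["corridor"],
    "server_room": ["corridor"],
}

def find_path(graph, start, goal):
    """Breadth-first search: a shortest path measured in doorways crossed."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # unreachable: an accessibility-verification failure

print(find_path(building, "lobby", "server_room"))
```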
Abstract:
In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make the computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
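A plausible form of the correlation dynamics, assuming the standard smooth-transition specification (the exact parameterisation is not given in the abstract): the conditional correlation matrix is a convex combination of two constant extreme states, with the weight driven by a logistic function of the transition variable s_t,

```latex
R_t = \bigl(1 - G(s_t)\bigr)\, R_{(1)} + G(s_t)\, R_{(2)},
\qquad
G(s_t) = \bigl(1 + e^{-\gamma (s_t - c)}\bigr)^{-1}, \quad \gamma > 0 .
```

As s_t moves from well below to well above the location parameter c, R_t moves smoothly from R_(1) to R_(2); constant correlations correspond to R_(1) = R_(2), which is the kind of restriction the LM and Wald tests target.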
Abstract:
This work describes recent extensions to the GPFlow scientific workflow system in development at MQUTeR (www.mquter.qut.edu.au), which facilitate interactive experimentation, automatic lifting of computations from single-case to collection-oriented computation, and automatic correlation and synthesis of collections. A GPFlow workflow presents as an acyclic data-flow graph, yet provides powerful iteration and collection-formation capabilities.
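The "lifting" of a single-case computation to a collection-oriented one can be pictured as automatically mapping a scalar step over a collection. A toy sketch of the idea only; this is not GPFlow's actual API, and the step and file names are invented:

```python
def lift(f):
    """Lift a single-case function to operate element-wise on a collection."""
    def lifted(items):
        return [f(item) for item in items]
    return lifted

def analyse(recording):          # stand-in for a single-case workflow step
    return f"analysed({recording})"

analyse_all = lift(analyse)      # the same step, now collection-oriented
print(analyse_all(["rec1.wav", "rec2.wav"]))
```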
Abstract:
The present study investigated the behavioral and neuropsychological characteristics of decision-making behavior during a gambling task, as well as how these characteristics may relate to the Somatic Marker Hypothesis and the Frequency of Gain model. The applicability to intertemporal choice was also discussed. Patterns of card selection during a computerized version of the Iowa Gambling Task were assessed for 10 men and 10 women. Steady State Topography was employed to assess cortical processing throughout this task. Results supported the hypothesis that patterns of card selection were in line with both theories. As hypothesized, these two patterns of card selection were also associated with distinct patterns of cortical activity, suggesting that intertemporal choice may involve the recruitment of the right dorsolateral prefrontal cortex for somatic labeling, the left fusiform gyrus for object representations, and the left dorsolateral prefrontal cortex for an analysis of the associated frequency of gain or loss. It is suggested that the processes contributing to intertemporal choice may include the inhibition of negatively valenced options, guiding decisions away from those options, as well as computations favoring frequently rewarded options.
Abstract:
In the Bayesian framework, a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that the computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is the approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model-checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both of these problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
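For readers unfamiliar with regression adjustment ABC: one fits a (typically linear) regression of simulated parameters on simulated summary statistics and shifts the accepted draws toward the observed statistic. A minimal sketch on a toy Gaussian model (everything here, including the model and tolerance, is illustrative, not the paper's application):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: theta ~ N(0, 1); summary s = mean of 20 N(theta, 1) observations.
n_sims = 5000
theta = rng.normal(0.0, 1.0, n_sims)
s = theta + rng.normal(0.0, 1.0 / np.sqrt(20), n_sims)
s_obs = 0.8                              # "observed" summary statistic

# ABC rejection step: keep the 10% of simulations closest to s_obs.
keep = np.abs(s - s_obs) <= np.quantile(np.abs(s - s_obs), 0.10)
th_k, s_k = theta[keep], s[keep]

# Linear regression adjustment: theta* = theta - beta * (s - s_obs).
beta = np.polyfit(s_k, th_k, 1)[0]       # slope of theta on s
theta_adj = th_k - beta * (s_k - s_obs)
print(f"raw mean ~ {th_k.mean():.3f}, adjusted mean ~ {theta_adj.mean():.3f}")
```

The adjustment lets a relatively loose tolerance (hence few model simulations) stand in for an expensive high-accuracy approximation, which is exactly why it suits the repeated-approximation settings described above.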
Abstract:
The purpose of this article is to assess the viability of blanket sustainability policies, such as building rating systems, in achieving energy efficiency in university campus buildings. We analyzed the energy consumption trends of 10 LEED-certified buildings and 14 non-LEED-certified buildings at a major university in the US. The mean Energy Use Intensity (EUI) of the LEED buildings was significantly higher (EUI_LEED = 331.20 kBtu/sf/yr) than that of the non-LEED buildings (EUI_non-LEED = 222.70 kBtu/sf/yr); however, the median EUI values were comparable (EUI_LEED = 172.64 and EUI_non-LEED = 178.16). Because the distributions of EUI values were non-symmetrical in this dataset, the median is the more appropriate measure for energy comparisons; this was also evident when the EUI computations excluded outliers (EUI_LEED = 171.82 and EUI_non-LEED = 195.41). Additional analyses were conducted to further explore the impact of LEED certification on the energy performance of university campus buildings. No statistically significant differences were observed between certified and non-certified buildings through a range of robust comparison criteria. These findings were then leveraged to devise strategies for achieving sustainable energy policies for university campus buildings and to identify potential issues with portfolio-level building energy performance comparisons.
Abstract:
Guaranteeing Quality of Service (QoS) at minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is achieved through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous and heterogeneous. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the problem is transformed into a constrained combinatorial optimization problem and solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using a heterogeneous MapReduce placement approach that does not consider the spare resources from existing MapReduce computations. The experimental results also demonstrate the good scalability of this new approach.
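To picture heterogeneous placement as a constrained combinatorial problem: for each job, choose a VM type and count that meet the job's deadline (the QoS constraint) at minimum cost. The greedy sketch below is illustrative only; the VM types, prices, and job figures are invented, and this is not the constructive algorithm of the paper.

```python
import math

# (name, task-hours of work processed per hour, price in $ per hour)
vm_types = [
    ("small", 1.0, 0.05),
    ("large", 4.0, 0.17),
]
jobs = [{"work": 40.0, "deadline": 4.0},   # task-hours, hours
        {"work": 12.0, "deadline": 2.0}]

def place(job):
    """Pick the cheapest (vm type, count) that finishes before the deadline."""
    best = None
    for name, speed, price in vm_types:
        count = math.ceil(job["work"] / (speed * job["deadline"]))
        cost = count * price * job["deadline"]
        if best is None or cost < best[2]:
            best = (name, count, cost)
    return best

for job in jobs:
    print(job, "->", place(job))
```

A heterogeneous optimizer searches across mixes of VM types (and, as the paper stresses, spare capacity on already-provisioned machines), which is where its cost advantage over a single homogeneous VM type comes from.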
Abstract:
A theoretical basis is required for comparing key features and critical elements of wild fisheries and aquaculture supply chains under a changing climate. Here we develop a new quantitative metric that is analogous to the indices used to analyse food webs and identify key species. The Supply Chain Index (SCI) identifies critical elements as those with large throughput rates as well as greater connectivity. The sum of the scores along a supply chain provides a single metric that roughly captures both the resilience and the connectedness of the chain. Standardised scores can facilitate cross-comparisons under current conditions as well as under a changing climate. Identification of key elements along the supply chain may assist in informing adaptation strategies to reduce the anticipated future risks posed by climate change. The SCI also provides information on the relative stability of different supply chains, based on whether the individual scores of the top few key elements are fairly evenly spread, or whether there is a more critical dependence on a few key individual supply chain elements. As a case study we use the Australian southern rock lobster (Jasus edwardsii) fishery, which is challenged by a number of climate change drivers, such as impacts on recruitment and growth due to changes in large-scale and local oceanographic features. The SCI identifies airports, processors, and Chinese consumers as the key elements of the lobster supply chain that merit attention to enhance stability and potentially enable growth. We also apply the index to four additional real-world Australian commercial fishery supply chains and two aquaculture industry supply chains to highlight the utility of a systematic method for describing supply chains. Overall, our simple methodological approach to empirically based supply chain research provides an objective method for comparing the resilience of supply chains and highlighting components that may be critical.
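The abstract does not give the exact formula, but one natural reading of the SCI is a per-element score combining throughput with connectivity, summed over the chain. The sketch below implements that reading only as a hypothetical; both the scoring rule and the element data are invented for illustration.

```python
# Hypothetical SCI-style scoring: each element's score is its throughput
# times its connectivity (number of links); the chain's index is the sum.
# This is one plausible reading of the metric, not the published formula.
elements = {            # (throughput, number of connections) - invented
    "fishers":           (1500, 3),
    "processors":        (1400, 5),
    "airports":          (1300, 6),
    "chinese_consumers": (1200, 4),
}

scores = {name: thr * deg for name, (thr, deg) in elements.items()}
sci = sum(scores.values())
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {score:6d}  ({score / sci:.1%} of chain total)")
```

Under such a scoring rule, a chain whose total is dominated by one or two elements is flagged as critically dependent on them, matching the stability interpretation described above.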