72 results for Oceanographic computations


Relevance: 10.00%

Abstract:

In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
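The smooth transition between the two extreme correlation states can be sketched with a logistic transition function; the two-asset setup and parameter values below are illustrative assumptions, not the paper's specification or estimates:

```python
import numpy as np

def transition(s, gamma, c):
    """Logistic transition function G(s), taking values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def conditional_correlation(R1, R2, s, gamma=5.0, c=0.0):
    """Correlation matrix that moves smoothly between the two extreme
    states R1 and R2 as the transition variable s changes."""
    g = transition(s, gamma, c)
    return (1.0 - g) * R1 + g * R2

# Two-asset example: 'calm' and 'turbulent' correlation states (made up).
R1 = np.array([[1.0, 0.2], [0.2, 1.0]])
R2 = np.array([[1.0, 0.8], [0.8, 1.0]])

# As s moves from well below c to well above c, the off-diagonal
# correlation moves smoothly from 0.2 towards 0.8.
for s in (-2.0, 0.0, 2.0):
    print(round(float(conditional_correlation(R1, R2, s)[0, 1]), 3))
```

The constant-correlation hypothesis tested by the LM statistic corresponds to gamma = 0, in which case the mixture collapses to a single fixed correlation matrix.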

Relevance: 10.00%

Abstract:

This work describes recent extensions to the GPFlow scientific workflow system in development at MQUTeR (www.mquter.qut.edu.au), which facilitate interactive experimentation, automatic lifting of computations from single-case to collection-oriented computation and automatic correlation and synthesis of collections. A GPFlow workflow presents as an acyclic data flow graph, yet provides powerful iteration and collection formation capabilities.
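The idea of automatically lifting a single-case computation to a collection-oriented one can be illustrated in miniature; the helper and function names below are hypothetical stand-ins and are not GPFlow's API:

```python
from typing import Callable, Iterable, List

def lift(f: Callable) -> Callable:
    """Lift a single-case computation into a collection-oriented one:
    the lifted function maps f over any iterable of inputs."""
    def lifted(xs: Iterable) -> List:
        return [f(x) for x in xs]
    return lifted

def analyse_one(sample: int) -> int:
    """Stand-in for a single-case analysis step (hypothetical)."""
    return sample * sample

# A workflow node written for one input is applied to a whole collection.
analyse_many = lift(analyse_one)
print(analyse_many([1, 2, 3]))  # [1, 4, 9]
```

In a data-flow setting such lifting keeps the workflow graph acyclic: iteration is expressed as a mapped node over a collection rather than a back-edge.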

Relevance: 10.00%

Abstract:

The present study investigated the behavioral and neuropsychological characteristics of decision-making behavior during a gambling task, as well as how these characteristics may relate to the Somatic Marker Hypothesis and the Frequency of Gain model. The applicability to intertemporal choice was also discussed. Patterns of card selection during a computerized interpretation of the Iowa Gambling Task were assessed for 10 men and 10 women. Steady State Topography was employed to assess cortical processing throughout this task. Results supported the hypothesis that patterns of card selection were in line with both theories. As hypothesized, these two patterns of card selection were also associated with distinct patterns of cortical activity, suggesting that intertemporal choice may involve the recruitment of right dorsolateral prefrontal cortex for somatic labeling, left fusiform gyrus for object representations, and the left dorsolateral prefrontal cortex for an analysis of the associated frequency of gain or loss. It is suggested that processes contributing to intertemporal choice may include inhibition of negatively valenced options, guiding decisions away from those options, as well as computations favoring frequently rewarded options.
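The contrast between a value-based account and the Frequency of Gain account of deck selection can be made concrete with toy payoff schedules loosely modelled on the Iowa Gambling Task; the numbers are illustrative, not the task's exact schedule:

```python
# Toy payoff schedules loosely modelled on the Iowa Gambling Task:
# A/B pay large wins but have worse long-term expected value; B and D
# lose only rarely, so a frequency-of-gain chooser favours them.
decks = {
    "A": {"win": 100, "loss": -250, "p_loss": 0.5},
    "B": {"win": 100, "loss": -1250, "p_loss": 0.1},
    "C": {"win": 50, "loss": -50, "p_loss": 0.5},
    "D": {"win": 50, "loss": -250, "p_loss": 0.1},
}

def expected_value(d):
    """Per-card expected payoff (value-based account)."""
    return d["win"] + d["p_loss"] * d["loss"]

def gain_frequency(d):
    """Probability a card yields a net gain (Frequency of Gain account)."""
    return 1.0 - d["p_loss"]

best_by_ev = max(decks, key=lambda k: expected_value(decks[k]))
best_by_freq = max(decks, key=lambda k: gain_frequency(decks[k]))
print(best_by_ev, best_by_freq)
```

The two accounts can disagree: the decks with the best expected value (C/D) are not the same set as the decks that win most often (B/D), which is what makes dissociating the two selection patterns informative.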

Relevance: 10.00%

Abstract:

In the Bayesian framework a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both these problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
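Regression adjustment ABC in its simplest (Beaumont-style, linear) form can be sketched as follows; the toy model and numbers are assumptions for illustration, not the paper's applications:

```python
import numpy as np

rng = np.random.default_rng(0)

def regression_adjust(thetas, summaries, s_obs):
    """Regression-adjustment ABC (one-dimensional sketch): fit a linear
    model of the parameter on the summary statistic and shift each
    simulated draw towards the observed summary s_obs."""
    X = summaries - s_obs
    beta = np.polyfit(X, thetas, 1)[0]   # slope of theta on (s - s_obs)
    return thetas - beta * X

# Toy problem: theta ~ N(0, 1) prior, observed summary s = theta + noise.
thetas = rng.normal(0.0, 1.0, size=5000)
summaries = thetas + rng.normal(0.0, 0.1, size=5000)
s_obs = 1.0

adjusted = regression_adjust(thetas, summaries, s_obs)
# After adjustment the draws concentrate near the posterior mean implied
# by the observed summary, without any rejection step.
print(round(float(adjusted.mean()), 2))
```

Because the adjustment is a single regression fit per data set, it is cheap enough to repeat over many reference data sets or hyperparameter values, which is exactly the regime the abstract targets.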

Relevance: 10.00%

Abstract:

The purpose of this article is to assess the viability of blanket sustainability policies, such as Building Rating Systems, in achieving energy efficiency in university campus buildings. We analyzed the energy consumption trends of 10 LEED-certified buildings and 14 non-LEED-certified buildings at a major university in the US. The mean Energy Use Intensity (EUI) of the LEED buildings was significantly higher (EUI_LEED = 331.20 kBtu/sf/yr) than that of the non-LEED buildings (EUI_non-LEED = 222.70 kBtu/sf/yr); however, the median EUI values were comparable (EUI_LEED = 172.64 and EUI_non-LEED = 178.16). Because the distributions of EUI values were non-symmetrical in this dataset, both measures can be used for energy comparisons; this was also evident when EUI computations excluded outliers (EUI_LEED = 171.82 and EUI_non-LEED = 195.41). Additional analyses were conducted to further explore the impact of LEED certification on the energy performance of university campus buildings. No statistically significant differences were observed between certified and non-certified buildings through a range of robust comparison criteria. These findings were then leveraged to devise strategies to achieve sustainable energy policies for university campus buildings and to identify potential issues with portfolio-level building energy performance comparisons.
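The mean/median gap driven by a skewed EUI distribution is easy to reproduce; the portfolio below is hypothetical, with one energy-intensive outlier (for example a laboratory building) dragging the mean far above the median:

```python
import statistics

def eui(annual_kbtu, floor_area_sf):
    """Energy Use Intensity in kBtu per square foot per year."""
    return annual_kbtu / floor_area_sf

# Hypothetical campus portfolio of per-building EUIs (kBtu/sf/yr);
# the last entry is an outlier such as a 24/7 lab building.
euis = [150.0, 160.0, 170.0, 180.0, 190.0, 1200.0]

mean_eui = statistics.mean(euis)
median_eui = statistics.median(euis)
print(round(mean_eui, 1), median_eui)  # 341.7 175.0
```

This mirrors the pattern in the abstract: a portfolio mean well above 300 kBtu/sf/yr alongside a median in the 170s, which is why robust (median-based or outlier-excluded) comparisons matter for portfolio-level benchmarking.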

Relevance: 10.00%

Abstract:

Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using the heterogeneous MapReduce placement approach that does not consider spare resources from existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.

Relevance: 10.00%

Abstract:

A theoretical basis is required for comparing key features and critical elements in wild fisheries and aquaculture supply chains under a changing climate. Here we develop a new quantitative metric that is analogous to indices used to analyse food-webs and identify key species. The Supply Chain Index (SCI) identifies critical elements as those elements with large throughput rates, as well as greater connectivity. The sum of the scores for a supply chain provides a single metric that roughly captures both the resilience and connectedness of a supply chain. Standardised scores can facilitate cross-comparisons both under current conditions and under a changing climate. Identification of key elements along the supply chain may assist in informing adaptation strategies to reduce anticipated future risks posed by climate change. The SCI also provides information on the relative stability of different supply chains, based on whether there is a fairly even spread in the individual scores of the top few key elements, compared with a more critical dependence on a few key individual supply chain elements. We use as a case study the Australian southern rock lobster Jasus edwardsii fishery, which is challenged by a number of climate change drivers such as impacts on recruitment and growth due to changes in large-scale and local oceanographic features. The SCI identifies airports, processors and Chinese consumers as the key elements in the lobster supply chain that merit attention to enhance stability and potentially enable growth. We also apply the index to a further four real-world Australian commercial fishery supply chains and two aquaculture industry supply chains to highlight the utility of a systematic method for describing supply chains. Overall, our simple methodological approach to empirically based supply chain research provides an objective method for comparing the resilience of supply chains and highlighting components that may be critical.
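A throughput-times-connectivity score in the spirit of the SCI can be sketched as follows; the scoring formula and the chain data are illustrative assumptions, not the published definition or the case-study values:

```python
def sci_scores(elements):
    """Score each chain element as throughput weighted by connectivity
    (an assumed formula), and sum the scores into a single chain metric."""
    scores = {name: tput * links for name, (tput, links) in elements.items()}
    return scores, sum(scores.values())

# Hypothetical lobster chain: (relative throughput, number of connections).
chain = {
    "fishers": (1.0, 2),
    "processors": (0.9, 4),
    "airports": (0.95, 5),
    "chinese_consumers": (0.8, 3),
}

scores, total = sci_scores(chain)
top = max(scores, key=scores.get)
print(top, round(total, 2))
```

A chain whose total is spread evenly across elements is, on this reading, more stable than one dominated by a single high-scoring element such as a lone export airport.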

Relevance: 10.00%

Abstract:

Layered graphitic materials exhibit intriguing new electronic structures, and the search for new types of two-dimensional (2D) monolayers is important for the fabrication of next-generation miniature electronic and optoelectronic devices. By means of density functional theory (DFT) computations, we investigated in detail the structural, electronic, mechanical and optical properties of the single-layer bismuth iodide (BiI3) nanosheet. Monolayer BiI3 is dynamically stable, as confirmed by the computed phonon spectrum. The cleavage energy (E_cl) and interlayer coupling strength of bulk BiI3 are comparable to the experimental values for graphite, which indicates that exfoliation of BiI3 is highly feasible. The obtained stress-strain curve shows that the BiI3 nanosheet is a brittle material with a breaking strain of 13%. The BiI3 monolayer has an indirect band gap of 1.57 eV with spin-orbit coupling (SOC), indicating its potential application in solar cells. Furthermore, the band gap of the BiI3 monolayer can be modulated by biaxial strain. Most interestingly, interfacing electrically active graphene with the monolayer BiI3 nanosheet leads to enhanced light absorption compared to that of the pure monolayer BiI3 nanosheet, highlighting great potential applications in photonics and photovoltaic solar cells.
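The cleavage-energy comparison rests on a simple energy difference per unit area between an isolated layer and the same layer inside the bulk stack; the sketch below uses made-up energies, not the BiI3 DFT values:

```python
def cleavage_energy(e_isolated_layer, e_layer_in_bulk, area):
    """Energy cost per unit area of peeling one layer off the bulk stack:
    E_cl = (E_isolated - E_in_bulk) / A."""
    return (e_isolated_layer - e_layer_in_bulk) / area

# Hypothetical DFT total energies (eV) and in-plane cell area (Angstrom^2):
e_cl = cleavage_energy(-100.00, -100.30, 50.0)
print(round(e_cl, 4))  # 0.006 eV per Angstrom^2
```

When E_cl for a candidate material is of the same order as graphite's experimental value, mechanical exfoliation of monolayers is generally considered feasible, which is the argument the abstract makes for BiI3.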

Relevance: 10.00%

Abstract:

"This chapter discusses laminar and turbulent natural convection in rectangular cavities. Natural convection in rectangular two-dimensional cavities has become a standard problem in numerical heat transfer because of its relevance in understanding a number of problems in engineering. Current research identified a number of difficulties with regard to the numerical methods and the turbulence modeling for this class of flows. Obtaining numerical predictions at high Rayleigh numbers proved computationally expensive such that results beyond Ra ∼ 10^14 are rarely reported. The chapter discusses a study in which it was found that turbulent computations in square cavities cannot be extended beyond Ra ∼ O(10^12) despite having developed a code that proved very efficient for the high Ra laminar regime. As the Rayleigh number increased, thin boundary layers began to form next to the vertical walls, and the central region became progressively more stagnant and highly stratified. Results obtained for the high Ra laminar regime were in good agreement with existing studies. Turbulence computations, although of a preliminary nature, indicated that a second moment closure model was capable of predicting the experimentally observed flow features."--Publisher Summary
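For reference, the Rayleigh number governing these buoyancy-driven cavity flows is Ra = g·β·ΔT·L³/(ν·α); a quick computation for air in a 1 m square cavity (approximate property values) shows how quickly Ra reaches the regimes discussed above:

```python
def rayleigh(g, beta, delta_t, length, nu, alpha):
    """Rayleigh number for buoyancy-driven convection in a cavity:
    Ra = g * beta * dT * L**3 / (nu * alpha)."""
    return g * beta * delta_t * length**3 / (nu * alpha)

# Air near room temperature: beta ~ 1/T, nu and alpha in m^2/s (approximate).
ra = rayleigh(g=9.81, beta=1 / 300.0, delta_t=10.0, length=1.0,
              nu=1.5e-5, alpha=2.1e-5)
print(f"{ra:.2e}")  # on the order of 1e9
```

Even this modest metre-scale cavity with a 10 K temperature difference gives Ra of order 10^9, so reaching the 10^12 to 10^14 range studied in the chapter requires either very large enclosures or working fluids and conditions that shrink ν·α, which is why such computations are so expensive.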

Relevance: 10.00%

Abstract:

In a very recent study [1] the Renormalisation Group (RNG) turbulence model was used to obtain flow predictions in a strongly swirling quarl burner, and was found to perform well in predicting certain features that are not well captured using less sophisticated models of turbulence. The implication is that the RNG approach should provide an economical and reliable tool for the prediction of swirling flows in combustor and furnace geometries commonly encountered in technological applications. To test this hypothesis the present work considers flow in a model furnace for which experimental data are available [2]. The essential features of the flow which differentiate it from the previous study [1] are that the annular air jet entry is relatively narrow and the base wall of the cylindrical furnace is at 90 degrees to the inlet pipe. For swirl numbers of order 1 the resulting flow is highly complex with significant inner and outer recirculation regions. The RNG and standard k-epsilon models are used to model the flow for both swirling and non-swirling entry jets, and the results are compared with experimental data [2]. Near wall viscous effects are accounted for in both models via the standard wall function formulation [3]. For the RNG model, additional computations with grid placement extending well inside the near wall viscous-affected sublayer are performed in order to assess the low Reynolds number capabilities of the model.
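The swirl number quoted above is conventionally the axial flux of angular momentum normalised by the axial momentum flux times the outer radius; the annular inlet profiles below are illustrative, not the experimental ones:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (kept local so the sketch has no version-specific deps)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def swirl_number(r, u_axial, w_tangential):
    """Swirl number S = (axial flux of angular momentum) /
    (outer radius * axial momentum flux), constant density assumed."""
    R = r[-1]
    g_phi = trapz(u_axial * w_tangential * r**2, r)
    g_x = trapz(u_axial**2 * r, r)
    return g_phi / (R * g_x)

# Illustrative inlet profiles on an annulus with outer radius 0.05 m:
r = np.linspace(0.02, 0.05, 200)
u = np.full_like(r, 10.0)     # uniform axial velocity, m/s
w = 10.0 * (r / r[-1])        # tangential velocity rising linearly with r

print(round(swirl_number(r, u, w), 2))
```

With tangential velocities comparable to the axial velocity, S comes out a little above 0.5, i.e. of order 1, the regime in which the strong inner and outer recirculation zones described above appear.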

Relevance: 10.00%

Abstract:

A computational model for isothermal axisymmetric turbulent flow in a quarl burner is set up using the CFD package FLUENT, and numerical solutions obtained from the model are compared with available experimental data. A standard k-epsilon model and two versions of the RNG k-epsilon model are used to model the turbulence. One of the aims of the computational study is to investigate whether the RNG-based k-epsilon turbulence models are capable of yielding improved flow predictions compared with the standard k-epsilon turbulence model. A difficulty is that the flow considered here features a confined vortex breakdown, which can be highly sensitive to flow behaviour both upstream and downstream of the breakdown zone. Nevertheless, the relatively simple confining geometry allows us to undertake a systematic study so that both grid-independent and domain-independent results can be reported. The systematic study includes a detailed investigation of the effects of upstream and downstream conditions on the predictions, in addition to grid refinement and other tests to ensure that numerical error is not significant. Another important aim is to determine to what extent the turbulence model predictions can provide us with new insights into the physics of confined vortex breakdown flows. To this end, the computations are discussed in detail with reference to known vortex breakdown phenomena and existing theories. A major conclusion is that one of the RNG k-epsilon models investigated here is able to correctly capture the complex forward flow region inside the recirculating breakdown zone. This apparently paradoxical result is in stark contrast to the findings of previous studies, most of which have concluded that either algebraic or differential Reynolds stress modelling is needed to correctly predict the observed flow features. Arguments are given as to why an isotropic eddy-viscosity turbulence model may well be able to capture the complex flow structure within the recirculating zone for this flow setup. With regard to the flow physics, a major finding is that the results obtained here are more consistent with the view that confined vortex breakdown is a type of axisymmetric boundary layer separation, rather than a manifestation of a subcritical flow state.
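Both the standard and RNG k-epsilon models are isotropic eddy-viscosity closures built on the relation μ_t = ρ·C_μ·k²/ε, with C_μ = 0.09 in the standard model and a derived value of about 0.0845 in the RNG theory; a minimal sketch of that relation with illustrative values:

```python
def eddy_viscosity(k, eps, c_mu=0.09, rho=1.0):
    """Turbulent (eddy) viscosity mu_t = rho * C_mu * k**2 / eps.
    c_mu = 0.09 is the standard k-epsilon constant; the RNG theory
    yields approximately 0.0845 instead."""
    return rho * c_mu * k * k / eps

# k in m^2/s^2 and eps in m^2/s^3 give mu_t in kg/(m s) for rho in kg/m^3.
print(eddy_viscosity(0.5, 2.0))
```

The "isotropic" label refers to this single scalar μ_t acting on all Reynolds-stress components alike, which is precisely the modelling assumption the previous studies cited above expected to fail inside the breakdown zone.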

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Autonomous underwater vehicles (AUVs) are becoming commonplace in the study of inshore coastal marine habitats. Combined with shipboard systems, scientists are able to make in-situ measurements of water column and benthic properties. At CSIRO, autonomous gliders are used to collect water column data, while surface vessels are used to collect bathymetry information through the use of swath mapping, bottom grabs, and towed video systems. Although these methods have provided good data coverage for coastal and deep waters beyond 50 m, there has been an increasing need for autonomous in-situ sampling in waters less than 50 m deep. In addition, the collection of benthic and water column data has been conducted separately, requiring extensive post-processing to combine data streams. As such, a new AUV was developed for in-situ observations of both benthic habitat and water column properties in shallow waters. This paper provides an overview of the Starbug X AUV system and its operational characteristics, including vision-based navigation and oceanographic sensor integration.