7 results for thermodynamical observables
in Queensland University of Technology - ePrints Archive
Abstract:
The results of studies on the growth of high-aspect-ratio nanostructures in low-temperature non-equilibrium plasmas of reactive gas mixtures, with or without hydrogen, are presented. The results suggest that hydrogen in the reactive plasma strongly affects the length of the nanostructures. This phenomenon is explained in terms of selective hydrogen passivation of the lateral and top surfaces of the surface-supported nanostructures. The theoretical model describes the effect of atomic hydrogen passivation on the nanostructure shape and predicts the critical hydrogen coverage of the lateral surfaces necessary to achieve nanostructure growth with a pre-determined shape. Our results demonstrate that the use of a strongly non-equilibrium plasma is highly effective in improving the shape control of quasi-one-dimensional single-crystalline nanostructures.
Abstract:
Despite significant research into the development of efficient algorithms for three-carrier ambiguity resolution, the full performance potential of the additional frequency signals cannot be demonstrated effectively without actual triple-frequency data. In addition, all of the proposed algorithms have shown difficulties in reliably resolving the medium-lane and narrow-lane ambiguities in various long-range scenarios. In this contribution, we investigate the effects of various distance-dependent biases and identify the tropospheric delay as the key limitation for long-range three-carrier ambiguity resolution. In order to achieve reliable ambiguity resolution in regional networks with inter-station distances of hundreds of kilometers, a new geometry-free and ionosphere-free model is proposed to fix the integer ambiguities of the medium-lane or narrow-lane observables within just several minutes and without distance constraints. Finally, a semi-simulation method is introduced to generate the third-frequency signals from dual-frequency GPS data and to demonstrate the research findings of this paper experimentally.
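As an illustration of the geometry-free, ionosphere-free idea, the sketch below forms the classical Melbourne-Wubbena combination from dual-frequency phase and code observations and rounds it to an integer wide-lane ambiguity. It is not the paper's three-carrier model; the GPS L1/L2 frequencies, units and averaging window are assumptions made purely for the example.

import numpy as np

# GPS L1/L2 constants for illustration; the paper's third frequency is not reproduced here.
C_LIGHT = 299_792_458.0                 # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6           # carrier frequencies, Hz
LAMBDA_WL = C_LIGHT / (F1 - F2)         # wide-lane wavelength, about 0.86 m

def melbourne_wubbena(phi1_m, phi2_m, p1_m, p2_m):
    """Geometry-free, ionosphere-free (Melbourne-Wubbena) combination.

    phi1_m, phi2_m : carrier-phase observations in metres
    p1_m, p2_m     : pseudorange observations in metres
    Returns a quantity whose expectation is LAMBDA_WL * (N1 - N2),
    i.e. the integer wide-lane ambiguity scaled by its wavelength.
    """
    wide_lane_phase = (F1 * phi1_m - F2 * phi2_m) / (F1 - F2)
    narrow_lane_code = (F1 * p1_m + F2 * p2_m) / (F1 + F2)
    return wide_lane_phase - narrow_lane_code

def fix_wide_lane(mw_series):
    """Average the combination over a short window and round to the nearest integer cycle."""
    return int(np.round(np.mean(mw_series) / LAMBDA_WL))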
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications.

For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation, ABC) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference.

Another important problem in statistics is the design of experiments: how should one select the values of the controllable variables in order to achieve some design goal? The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information and make better decisions about future design points. This is of particular interest if the data can be collected sequentially; in a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty.

Finally, Part III of this thesis tackles a substantive medical problem. Motor neuron disease (MND) is a neurological disorder in which motor neurons progressively lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs, the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis, ALS) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) attempts to assess underlying motor unit loss directly, rather than through indirect techniques such as muscle strength assessment, which generally cannot detect progression because of the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique, which developed a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by marginalising over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We also make other subtle changes to the model and algorithm to improve the robustness of the approach.
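To make the likelihood-free idea described in Part I concrete, the minimal rejection-ABC sketch below draws parameters from the prior, simulates data from the model, and keeps draws whose simulated summary statistics fall close to the observed ones. It is a deliberately simplified illustration, not the thesis's SMC-based algorithms; the toy normal model, summary statistic and tolerance are assumptions.

import numpy as np

def abc_rejection(observed, simulate, summarise, prior_sampler,
                  n_samples=500, tolerance=0.2):
    """Minimal rejection-ABC sampler (illustrative only).

    observed      : the observed data set
    simulate      : function theta -> simulated data set
    summarise     : function data -> vector of summary statistics
    prior_sampler : function () -> a draw of theta from the prior
    Accepts draws whose simulated summaries lie within `tolerance`
    (Euclidean distance) of the observed summaries.
    """
    s_obs = summarise(observed)
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sampler()
        s_sim = summarise(simulate(theta))
        if np.linalg.norm(s_sim - s_obs) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: approximate posterior for the mean of a normal model with known variance.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
posterior_draws = abc_rejection(
    observed=data,
    simulate=lambda mu: rng.normal(mu, 1.0, size=100),
    summarise=lambda x: np.array([np.mean(x)]),
    prior_sampler=lambda: rng.normal(0.0, 5.0),
)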
Abstract:
Using American panel data from the National Education Longitudinal Study of 1988, this article investigates the effect of working during grade 12 on attainment. We employ, for the first time in the related literature, a semiparametric propensity score matching approach combined with difference-in-differences. We address selection on both observables and unobservables associated with part-time work decisions, without the need for an instrumental variable. Once such factors are controlled for, little to no effect on reading and math scores is found. Overall, our results therefore suggest a negligible academic cost from part-time working by the end of high school.
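A bare-bones illustration of combining propensity score matching with difference-in-differences is sketched below. It uses nearest-neighbour matching on an estimated propensity score rather than the article's semiparametric approach, and the variable names and data layout are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_did(X, treated, y_pre, y_post):
    """Nearest-neighbour propensity-score matching combined with
    difference-in-differences (hypothetical column layout).

    X       : (n, k) array of pre-treatment covariates
    treated : (n,) boolean array, True if the student worked part-time in grade 12
    y_pre   : (n,) outcome before treatment (e.g. an earlier test score)
    y_post  : (n,) outcome after treatment (e.g. the grade-12 test score)
    Returns the matched difference-in-differences estimate of the
    average effect on the treated.
    """
    pscore = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    treat_idx = np.where(treated)[0]
    control_idx = np.where(~treated)[0]
    effects = []
    for i in treat_idx:
        j = control_idx[np.argmin(np.abs(pscore[control_idx] - pscore[i]))]  # closest control
        effects.append((y_post[i] - y_pre[i]) - (y_post[j] - y_pre[j]))
    return float(np.mean(effects))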
Abstract:
Strain-based failure criteria have several advantages over stress-based failure criteria: they can account for elastic and inelastic strains, they utilise directly observable effects rather than inferred effects (strain gauges vs. stress estimates), and they can model complete stress-strain curves including pre-peak non-linear elasticity and post-peak strain weakening. In this study, a strain-based failure criterion derived from thermodynamic first principles utilising the concepts of continuum damage mechanics is presented. Furthermore, implementation of this failure criterion in a finite-element simulation is demonstrated and applied to the stability of underground mining coal pillars. In numerical studies, pillar strength is usually expressed in terms of critical stresses or stress-based failure criteria, where scaling with pillar width and height is common. Previous publications have employed the finite-element method for pillar stability analysis using stress-based failure criteria such as Mohr-Coulomb and Hoek-Brown, or stress-based scalar damage models. A novel constitutive material model, which takes into consideration anisotropy as well as elastic strain and damage as state variables, has been developed and is presented in this paper. The damage threshold and its evolution are strain-controlled, and coupling of the state variables is achieved through the damage-induced degradation of the elasticity tensor. This material model is implemented in the finite-element software ABAQUS and can be applied to 3D problems. Initial results show that this new material model is capable of describing the non-linear behaviour of geomaterials commonly observed before peak strength is reached, as well as post-peak strain softening. Furthermore, it is demonstrated that the model can account for directional dependency of failure behaviour (i.e. anisotropy) and has the potential to be extended to environmental controls such as temperature or moisture.
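For orientation, the sketch below implements a much simpler relative of the constitutive idea described above: an isotropic, strain-driven scalar damage model in which an irreversible history variable tracks the largest equivalent strain seen so far and damage degrades the elasticity tensor. The paper's model is anisotropic and thermodynamically derived; the equivalent-strain measure, exponential softening law and parameter names here are assumptions for illustration only.

import numpy as np

def isotropic_elasticity(E, nu):
    """3D isotropic elasticity tensor in 6x6 Voigt notation (engineering shear strains)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[np.arange(3), np.arange(3)] += 2 * mu
    C[np.arange(3, 6), np.arange(3, 6)] = mu
    return C

def damaged_stress(strain_voigt, C0, kappa0, kappa_u, state):
    """Strain-controlled scalar damage update with exponential softening.

    strain_voigt : (6,) strain in Voigt notation [e11, e22, e33, g12, g13, g23]
    C0           : undamaged 6x6 elasticity tensor
    kappa0       : damage threshold (equivalent strain at damage onset)
    kappa_u      : softening parameter controlling the post-peak slope
    state        : dict holding the irreversible history variable 'kappa'
    Returns the nominal stress (1 - d) * C0 @ strain.
    """
    # Equivalent strain: norm of the positive principal strains (tension-driven damage).
    e = strain_voigt
    eps = np.array([[e[0], e[3] / 2, e[4] / 2],
                    [e[3] / 2, e[1], e[5] / 2],
                    [e[4] / 2, e[5] / 2, e[2]]])
    eq = np.linalg.norm(np.clip(np.linalg.eigvalsh(eps), 0.0, None))
    state['kappa'] = max(state.get('kappa', kappa0), eq)   # damage never heals
    k = state['kappa']
    d = 0.0 if k <= kappa0 else 1.0 - (kappa0 / k) * np.exp(-(k - kappa0) / kappa_u)
    return (1.0 - d) * (C0 @ strain_voigt)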
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single receiver data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases, known as the time-differenced method and the polynomial prediction method respectively. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is therefore applicable to both differenced and undifferenced data processing modes. However, the methods may be limited to normal ionospheric conditions and GNSS receivers with low autocorrelation. Experimental results also indicate the proposed method can result in more realistic parameter precision.
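As a simplified illustration of the single-receiver idea, the sketch below forms the geometry-free phase combination for one satellite and differences it between consecutive epochs, so that the constant ambiguity and, under calm ionospheric conditions, most of the slowly varying ionospheric delay cancel, leaving essentially measurement noise. This is only a conceptual sketch of the time-differenced approach; the polynomial prediction variant and the paper's full variance estimation procedure are not reproduced.

import numpy as np

def single_receiver_phase_noise(l1_m, l2_m):
    """Estimate carrier-phase noise from one receiver and one satellite
    using the time-differenced geometry-free combination (conceptual sketch).

    l1_m, l2_m : carrier-phase time series in metres at consecutive epochs
    Returns an estimated standard deviation of the geometry-free phase noise.
    """
    gf = np.asarray(l1_m) - np.asarray(l2_m)   # geometry-free: cancels geometry, clocks, troposphere
    dgf = np.diff(gf)                          # epoch differences: remove ambiguity and slow ionosphere
    # Differencing two epochs doubles the noise variance, hence the sqrt(2) scaling.
    return float(np.std(dgf) / np.sqrt(2.0))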
Abstract:
This study provides evidence that, after several decades of fighting for equal pay for equal work, an unexplained gender pay gap remains amongst senior executives in ASX-listed firms. After controlling for a large suite of personal, occupational and firm observables, we find female senior executives receive, on average, 22.58 percent less in base salary over the period 2002–2013. When executives are awarded performance-based pay, females receive on average 16.47 percent less in cash bonus and 18.21 percent less in long-term incentives than males. The results are robust to using firm fixed effects and propensity-score matching. Blinder–Oaxaca decomposition results show that the mean pay gap cannot be attributed to gender differences in attributes, including job titles. Instead, the results point to differences in returns to firm-specific variables, in particular firm risk.
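To show the mechanics of the Blinder–Oaxaca decomposition mentioned above, the sketch below splits a mean gap in log pay into a component explained by group differences in attributes and an unexplained component due to differences in returns. The two-fold form with male coefficients as the reference is one common convention; the column contents and names are hypothetical and the study's own specification is not reproduced.

import numpy as np

def blinder_oaxaca(X_m, y_m, X_f, y_f):
    """Two-fold Blinder-Oaxaca decomposition of a mean outcome gap.

    X_m, y_m : covariate matrix (including a constant column) and log pay, male group
    X_f, y_f : covariate matrix (including a constant column) and log pay, female group
    Returns (explained, unexplained) components of mean(y_m) - mean(y_f),
    with the male coefficients used as the reference wage structure.
    """
    beta_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)
    beta_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)
    xbar_m, xbar_f = X_m.mean(axis=0), X_f.mean(axis=0)
    explained = (xbar_m - xbar_f) @ beta_m      # gap due to differences in attributes
    unexplained = xbar_f @ (beta_m - beta_f)    # gap due to differences in returns
    return float(explained), float(unexplained)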