985 results for Cosmological constants
Abstract:
Simplifying the Einstein field equation by assuming the cosmological principle yields a set of differential equations which govern the dynamics of the universe as described by the cosmological standard model. The cosmological principle assumes that space appears the same everywhere and in every direction; the principle has earned its position as a fundamental assumption in cosmology by being compatible with the observations of the 20th century. It was not until the current century that observations on cosmological scales showed significant deviations from isotropy and homogeneity, implying a violation of the principle. Among these observations are the inconsistency between local and non-local evaluations of the Hubble parameter, the baryon acoustic features of the Lyman-α forest, and the anomalies of the cosmic microwave background radiation. As a consequence, cosmological models beyond the cosmological principle have been studied extensively; after all, the principle is a hypothesis and as such should be tested as frequently as any other assumption in physics. In this thesis, the effects of inhomogeneity and anisotropy, arising as a consequence of discarding the cosmological principle, are investigated. The geometry and matter content of the universe become more intricate, and the resulting effects on the Einstein field equation are introduced. The cosmological standard model and its issues, both fundamental and observational, are presented. Particular attention is given to observations of the local Hubble parameter, supernova explosions, baryon acoustic oscillations, and the cosmic microwave background, as well as to the cosmological constant problems. Explored and proposed resolutions that emerge from violating the cosmological principle are reviewed. The thesis concludes with a summary and outlook of the included research papers.
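For reference, the differential equations referred to above are the Friedmann equations, which in standard form for the FLRW metric with scale factor a(t), curvature constant k, energy density ρ and pressure p read

\[
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3},
\]

with H = ȧ/a the Hubble parameter and Λ the cosmological constant.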
Abstract:
This paper introduces fast algorithms for performing group operations on twisted Edwards curves, pushing the recent speed limits of Elliptic Curve Cryptography (ECC) forward in a wide range of applications. Notably, the new addition algorithm uses 8M for suitably selected curve constants. In comparison, the fastest point addition algorithms for (twisted) Edwards curves stated in the literature use 9M + 1D. It is also shown that the new addition algorithm can be implemented with four processors, dropping the effective cost to 2M. This implies an effective speed increase by the full factor of 4 over the sequential case. Our results allow faster implementations of elliptic curve scalar multiplication. In addition, the new point addition algorithm can be used to provide a natural protection against side channel attacks based on simple power analysis (SPA).
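The operation counts above refer to field multiplications (M) and multiplications by a curve constant (D). As an illustration of the style of formulas involved, here is a sketch of the well-known 8M dedicated addition in extended coordinates on a twisted Edwards curve with a = -1 (not necessarily the paper's exact algorithm); the toy prime p = 13 and constant d = 2 are invented demo parameters:

```python
# Sketch: 8M dedicated point addition in extended coordinates (X:Y:Z:T),
# T = X*Y/Z, on a twisted Edwards curve -x^2 + y^2 = 1 + d*x^2*y^2 (a = -1).
# Toy parameters; formulas of this shape are catalogued in the
# Explicit-Formulas Database.
p, d = 13, 2

def add_extended(P, Q):
    """Dedicated addition (assumes P != Q); exactly 8 field multiplications."""
    X1, Y1, Z1, T1 = P
    X2, Y2, Z2, T2 = Q
    A = (Y1 - X1) * (Y2 + X2) % p           # 1M
    B = (Y1 + X1) * (Y2 - X2) % p           # 2M
    C = 2 * Z1 * T2 % p                     # 3M (doubling by 2 is not a mult)
    D = 2 * T1 * Z2 % p                     # 4M
    E, F, G, H = (D + C) % p, (B - A) % p, (B + A) % p, (D - C) % p
    return (E * F % p, G * H % p, F * G % p, E * H % p)  # 5M..8M

def affine_add(P, Q):
    """Reference affine addition law for a = -1."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 + x1 * x2) * pow(1 - t, -1, p) % p     # a = -1 flips the sign
    return (x3, y3)

# Enumerate affine points on the toy curve and cross-check the two laws.
pts = [(x, y) for x in range(p) for y in range(p)
       if (-x * x + y * y - 1 - d * x * x * y * y) % p == 0]
P, Q = pts[1], pts[2]                        # two distinct points
X3, Y3, Z3, T3 = add_extended((P[0], P[1], 1, P[0] * P[1] % p),
                              (Q[0], Q[1], 1, Q[0] * Q[1] % p))
zinv = pow(Z3, -1, p)
assert (X3 * zinv % p, Y3 * zinv % p) == affine_add(P, Q)
```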
Abstract:
This paper provides new results about efficient arithmetic on Jacobi quartic form elliptic curves, y^2 = d*x^4 + 2a*x^2 + 1. With recent bandwidth-efficient proposals, the arithmetic on Jacobi quartic curves became solidly faster than that of Weierstrass curves. These proposals use up to 7 coordinates to represent a single point. However, fast scalar multiplication algorithms based on windowing techniques precompute and store several points, which requires more space than it would with 3 coordinates. Also note that some of these proposals require d = 1 for full speed. Unfortunately, elliptic curves having 2-times-a-prime number of points cannot be written in Jacobi quartic form if d = 1. Even worse, the contemporary formulae may fail to output correct coordinates for some inputs. This paper provides improved speeds using fewer coordinates without causing the above mentioned problems. For instance, our proposed point doubling algorithm takes only 2 multiplications, 5 squarings, and no multiplication by curve constants when d is arbitrary and a = ±1/2.
Abstract:
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us to understand the impact of estimation error on the performance of in-sample optimal portfolios. Key Words: minimum-variance frontier; efficiency set constants; finite sample distribution
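For context, the sample frontier in question is the plug-in estimator built from the classical efficiency set constants A = 1'S⁻¹μ, B = μ'S⁻¹μ, C = 1'S⁻¹1; a minimal numpy sketch with simulated returns (all numbers below are invented stand-ins, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated i.i.d. multivariate normal returns: T observations, N assets.
T, N = 120, 5
mu_true = np.linspace(0.02, 0.10, N)
Sigma_true = 0.04 * (0.3 * np.ones((N, N)) + 0.7 * np.eye(N))
R = rng.multivariate_normal(mu_true, Sigma_true, size=T)

# Plug-in (sample) estimates of mean vector and covariance matrix.
mu = R.mean(axis=0)
S = np.cov(R, rowvar=False)

# Efficiency set constants: A = 1'S^-1 mu, B = mu'S^-1 mu, C = 1'S^-1 1.
Sinv = np.linalg.inv(S)
ones = np.ones(N)
A = ones @ Sinv @ mu
B = mu @ Sinv @ mu
C = ones @ Sinv @ ones
D = B * C - A * A

def frontier_var(m):
    """Sample frontier: minimal variance attainable at target mean m."""
    return (C * m * m - 2.0 * A * m + B) / D

# Global minimum-variance portfolio: weights S^-1 1 / C, variance 1/C.
w_gmv = Sinv @ ones / C
print("GMV weights:", w_gmv.round(3), " GMV variance:", 1.0 / C)
print("frontier variance at m = 0.06:", frontier_var(0.06))
```

Because mu and S are noisy estimates, this sample frontier lies below the population frontier on average, which is the bias the paper analyses.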
Abstract:
Biomechanical and biophysical principles can be applied to study biological structures in their modern or fossil form. Bone is an important tissue in paleontological studies, as it is a commonly preserved element in most fossil vertebrates, and its microstructures, such as lacunae and canaliculi, can often be studied in detail. In this context, the principles of fluid mechanics and scaling laws have previously been applied to enhance the understanding of bone microarchitecture and its implications for the evolution of hydraulic structures that transport fluid. It has been shown that the microstructure of bone has evolved to maintain efficient transport between the nutrient supply and cells, the living components of the tissue. Application of the principle of minimal expenditure of energy to this analysis shows that a path distance comprising five or six lamellar regions represents an effective limit for fluid and solute transport between the nutrient supply and cells; beyond this threshold, hydraulic resistance in the network increases and additional energy expenditure is necessary for further transport. This suggests an optimization of the size of bone's building blocks (such as osteon or trabecular thickness) to meet metabolic demand concomitant with minimal expenditure of energy. This biomechanical aspect of bone microstructure is corroborated by the ratio of osteon to Haversian canal diameters and the scaling constants of the several mammals considered in this study. This aspect of vertebrate bone microstructure and physiology may provide a basis for understanding the form-function relationship in both extinct and extant taxa.
Abstract:
In this thesis, a new technique has been developed for determining the composition of a collection of loads including induction motors. The application would be to provide a representation of the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has been on post-disturbance analysis, not on continuous on-line models for loads. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertias and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. This thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to continuously decompose the load and to characterise it in terms of different sizes and amounts of motor loads.
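The thesis's actual identification procedure is not reproduced here, but the flavour of decomposing a measured composite response into percentage contributions of candidate motor models can be sketched as a non-negative least-squares problem; the "basis" responses and time constants below are synthetic stand-ins, not real motor data:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Synthetic stand-in: each column is the power response of one candidate
# induction motor model to the same small voltage disturbance.
t = np.linspace(0.0, 2.0, 200)
taus = [0.05, 0.15, 0.40, 1.00]              # assumed motor time constants
basis = np.column_stack([np.exp(-t / tau) for tau in taus])

# "Measured" composite load: a hidden mix of the motors plus noise.
w_true = np.array([0.5, 0.0, 0.3, 0.2])
measured = basis @ w_true + 0.01 * rng.standard_normal(t.size)

# Recover the percentage contributions, constrained to be non-negative.
w_hat, _ = nnls(basis, measured)
print("true mix:", w_true, " estimated:", w_hat.round(3))
```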
Abstract:
Various piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large-aperture space-based telescopes. Dimensional adjustments of adaptive polymer films depend on charge deposition and require a detailed understanding of the piezoelectric material responses, which are expected to deteriorate owing to strong vacuum UV, γ-ray, X-ray, energetic particle and atomic oxygen exposure. We have investigated the degradation of PVDF and its copolymers under various stress environments detrimental to reliable operation in space. Initial radiation aging studies have shown complex material changes with lowered Curie temperatures and melting points, morphological transformations and significant crosslinking, but little influence on the piezoelectric d33 constants. Complex aging processes have also been observed in accelerated temperature environments inducing annealing phenomena and cyclic stresses. The results suggest that poling and chain orientation are negatively affected by radiation and temperature exposure. A framework for dealing with these complex material qualification issues and overall system survivability predictions in low Earth orbit conditions has been established. It allows for improved material selection, feedback for manufacturing and processing, and material optimization/stabilization strategies, and provides guidance on alternative materials.
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence) or indicate that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
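To make the path-sampling idea concrete: for an exponential-family MRF p(x) ∝ exp(θ t(x)), d log Z/dθ = E_θ[t(x)], so a log-NC ratio is the integral of the mean canonical statistic along a path in θ. Below is a small self-contained sketch (not the thesis's IMCS implementation) for a toy Ising-type binary MRF on a 3×3 lattice, where everything is checkable by brute force:

```python
import itertools
import numpy as np

# Toy Ising-type MRF on a 3x3 lattice: p(x) proportional to exp(theta * t(x)),
# where t(x) is the sum of x_i * x_j over nearest-neighbour edges, x_i in {-1,+1}.
n = 3
idx = lambda i, j: i * n + j
edges = [(idx(i, j), idx(i, j + 1)) for i in range(n) for j in range(n - 1)]
edges += [(idx(i, j), idx(i + 1, j)) for i in range(n - 1) for j in range(n)]

# Enumerate all 2^9 states and their canonical statistics (brute force).
states = list(itertools.product([-1, 1], repeat=n * n))
ts = np.array([sum(s[a] * s[b] for a, b in edges) for s in states])

def log_Z(theta):
    m = (theta * ts).max()                     # stable log-sum-exp
    return m + np.log(np.exp(theta * ts - m).sum())

def mean_t(theta):
    w = np.exp(theta * ts - (theta * ts).max())
    return (w * ts).sum() / w.sum()

# Path-sampling identity: log Z(th1) - log Z(th0) = integral of E_theta[t].
th0, th1 = 0.0, 0.5
grid = np.linspace(th0, th1, 51)
estimate = np.trapz([mean_t(th) for th in grid], grid)
print(f"path estimate {estimate:.4f}  exact {log_Z(th1) - log_Z(th0):.4f}")
# In a real MRF application, E_theta[t] at each grid point comes from MCMC
# draws, since neither Z nor the exact expectation is available.
```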
Abstract:
Modern statistical models and computational methods can now incorporate uncertainty in the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods, but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo (MCMC) Gibbs sampling via the freely available software WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply 'plugged in', as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
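This is not the paper's WinBUGS model, but the plug-in versus full-uncertainty contrast can be sketched with a grid posterior for a hypothetical exponential dose-response model P(inf | d) = 1 - exp(-r d); the challenge-trial data and doses below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical challenge-trial data: dose, subjects exposed, infected.
dose = np.array([1e2, 1e3, 1e4])
n = np.array([10, 10, 10])
y = np.array([1, 4, 9])

# Exponential dose-response: P(inf | d) = 1 - exp(-r d).
r_grid = np.logspace(-6, -2, 400)
rd = np.outer(r_grid, dose)
p = -np.expm1(-rd)                             # 1 - exp(-r d), computed stably
loglik = (y * np.log(p) + (n - y) * (-rd)).sum(axis=1)
post = np.exp(loglik - loglik.max())
post /= post.sum()                             # grid posterior for r (flat prior)

# Plug-in uses a single "best" r; Bayesian propagates the whole posterior.
r_hat = r_grid[loglik.argmax()]
r_draws = rng.choice(r_grid, size=5000, p=post)

d0 = 500.0                                     # assessment dose
p_plug = 1.0 - np.exp(-r_hat * d0)
p_post = 1.0 - np.exp(-r_draws * d0)
lo, hi = np.percentile(p_post, [2.5, 97.5])
print(f"plug-in P(inf) = {p_plug:.3f}; 95% interval ({lo:.3f}, {hi:.3f})")
```

The width of the interval relative to the single plug-in number is exactly the extra variability the abstract describes.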
Abstract:
The indoline dyes D102, D131, D149, and D205 have been characterized when adsorbed on fluorine-doped tin oxide (FTO) and TiO2 electrode surfaces. Adsorption from 50:50 acetonitrile - tert-butanol onto fluorine-doped tin oxide (FTO) allows approximate Langmuirian binding constants of 6.5 x 10^4, 2.01 x 10^3, 2.0 x 10^4, and 1.5 x 10^4 mol^-1 dm^3, respectively, to be determined. Voltammetric data obtained in acetonitrile/0.1 M NBu4PF6 indicate reversible one-electron oxidation at Emid = 0.94, 0.91, 0.88, and 0.88 V vs Ag/AgCl (3 M KCl), respectively, with dye aggregation (at high coverage) causing additional peak features at more positive potentials. Slow chemical degradation processes and electron transfer catalysis for iodine oxidation were observed for all four oxidized indolinium cations. When adsorbed onto TiO2 nanoparticle films (ca. 9 nm particle diameter and ca. 3 μm film thickness on FTO), reversible voltammetric responses with Emid = 1.08, 1.156, 0.92, and 0.95 V vs Ag/AgCl (3 M KCl), respectively, suggest exceptionally fast hole hopping diffusion (with Dapp > 5 x 10^-9 m^2 s^-1) for adsorbed layers of the four indoline dyes, presumably due to π-π stacking in surface aggregates. Slow dye degradation is shown to affect charge transport via electron hopping. Spectroelectrochemical data for the adsorbed indoline dyes on FTO-TiO2 revealed a red shift of the absorption peaks after oxidation and the presence of a strong charge transfer band in the near-IR region. The implications of the indoline dye reactivity and fast hole mobility for solar cell devices are discussed.
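For context, a Langmuir isotherm fit of the kind behind such binding constants can be sketched as follows; the concentrations, noise level, and K value below are invented, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def langmuir(c, K, gamma_max):
    """Surface coverage vs concentration for a binding constant K."""
    return gamma_max * K * c / (1.0 + K * c)

# Synthetic dye-adsorption data: concentration in mol dm^-3.
c = np.array([1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4])
K_true, gmax_true = 2.0e4, 1.0                 # invented "true" values
theta = langmuir(c, K_true, gmax_true) * (1 + 0.03 * rng.standard_normal(c.size))

(K_fit, gmax_fit), _ = curve_fit(langmuir, c, theta, p0=(1e4, 1.0))
print(f"fitted K = {K_fit:.3g} mol^-1 dm^3, saturation coverage = {gmax_fit:.3g}")
```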
Abstract:
The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, e117 (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic to the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving the computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
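The binomial τ-DSSA itself is more involved than can be shown here; the sketch below illustrates only the underlying delayed-SSA idea it accelerates, for a toy delayed-production/decay system (all rates and the delay are invented): a delayed reaction initiated at time t completes at t + τ_d via a queue of pending events.

```python
import heapq
import numpy as np

rng = np.random.default_rng(4)

# Toy delayed birth-death process (parameters invented):
#   0 --k1--> X  (the new X appears only after a fixed delay tau_d)
#   X --k2--> 0  (immediate degradation)
k1, k2, tau_d = 5.0, 0.2, 2.0
t, t_end, x = 0.0, 50.0, 0
pending = []                       # min-heap of delayed completion times

while t < t_end:
    a1, a2 = k1, k2 * x            # reaction propensities
    dt = rng.exponential(1.0 / (a1 + a2))
    if pending and pending[0] <= t + dt:
        # A delayed production completes before the next initiation:
        # jump there, apply it, and redraw (propensities have changed).
        t = heapq.heappop(pending)
        x += 1
    else:
        t += dt
        if rng.uniform() * (a1 + a2) < a1:
            heapq.heappush(pending, t + tau_d)   # schedule delayed product
        else:
            x -= 1                               # immediate degradation

print(f"t = {t:.1f}, copy number = {x}, pending products = {len(pending)}")
```

The τ-leap extension replaces these one-reaction-at-a-time steps with binomially distributed reaction counts over a larger step, which is where step-size and under-counting issues arise.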
Abstract:
Discrete stochastic simulations, via techniques such as the Stochastic Simulation Algorithm (SSA), are a powerful tool for understanding the dynamics of chemical kinetics when there are low numbers of certain molecular species. However, an important constraint is the assumption of well-mixedness and homogeneity. In this paper, we show how to use Monte Carlo simulations to estimate an anomalous diffusion parameter that encapsulates the crowdedness of the spatial environment. We then use this parameter to replace the rate constants of bimolecular reactions by a time-dependent power law, producing an SSA (which we call ASSA) that is valid in cases where anomalous diffusion occurs or the system is not well-mixed. Simulations then show that ASSA can successfully predict the temporal dynamics of chemical kinetics in a spatially constrained environment.
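A toy version of the estimation step (the obstacle density, lattice size and walk length are arbitrary choices, not the paper's settings): estimate the anomalous diffusion exponent α in MSD ∝ t^α from Monte Carlo random walks on a lattice with immobile crowders; α < 1 indicates subdiffusion.

```python
import numpy as np

rng = np.random.default_rng(5)

L, phi = 64, 0.35                  # lattice size, obstacle density (arbitrary)
obstacles = rng.random((L, L)) < phi
steps, walkers = 400, 300
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

msd = np.zeros(steps)
for _ in range(walkers):
    while True:                    # start each walker on a free site
        start = rng.integers(0, L, size=2)
        if not obstacles[start[0], start[1]]:
            break
    pos = start.copy()             # unwrapped position for displacement
    for s in range(steps):
        trial = pos + moves[rng.integers(4)]
        if not obstacles[trial[0] % L, trial[1] % L]:   # periodic obstacle field
            pos = trial
        disp = pos - start
        msd[s] += disp @ disp
msd /= walkers

# Crude single-exponent fit of MSD ~ t^alpha on log-log scale.
tvec = np.arange(1, steps + 1)
alpha, _ = np.polyfit(np.log(tvec), np.log(msd + 1e-12), 1)
print(f"estimated anomalous diffusion exponent alpha = {alpha:.2f}")
```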
Abstract:
Fixed-wing aircraft equipped with downward-pointing cameras and/or LiDAR can be used for inspecting approximately piecewise linear assets such as oil and gas pipelines, roads and power lines. Automatic control of such aircraft is important from a productivity and safety point of view (long periods of precision manual flight at low altitude are not considered reasonable from a safety perspective). This paper investigates the effect of unwanted coupling between the guidance and autopilot loops (typically caused by unmodeled delays in the aircraft's response), and the specific impact of such unwanted dynamics on the performance of aircraft undertaking inspection of piecewise linear corridor assets (such as powerlines). Simulation studies and experimental flight tests are used to demonstrate the benefits of a simple compensator in mitigating the unwanted lateral oscillatory behaviour (or coupling) caused by unmodeled time constants in the aircraft dynamics.
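A hedged illustration of the mechanism only: the double-integrator guidance model, gains, lag and lead compensator below are invented for the sketch and are not the paper's aircraft model or compensator.

```python
import numpy as np
from scipy import signal

# Cross-track error behaves roughly like a double integrator 1/s^2; the PD
# guidance u = (kd s + kp) e is designed ignoring the autopilot, while an
# unmodeled first-order lag 1/(tau s + 1) actually sits inside the loop.
kp, kd, tau = 1.0, 2.0, 0.5

def closed_loop(num_c, den_c):
    """Unity-feedback loop of controller num_c/den_c * plant 1/(s^2 (tau s + 1))."""
    num_ol = np.asarray(num_c, dtype=float)
    den_ol = np.polymul(den_c, np.polymul([1.0, 0.0, 0.0], [tau, 1.0]))
    return signal.TransferFunction(num_ol, np.polyadd(den_ol, num_ol))

# (a) PD alone: the unmodeled lag produces lightly damped lateral oscillation.
plain = closed_loop([kd, kp], [1.0])

# (b) PD plus a simple lead compensator (tau s + 1)/(0.1 tau s + 1)
#     that counteracts the unmodeled lag.
lead = closed_loop(np.polymul([kd, kp], [tau, 1.0]), [0.1 * tau, 1.0])

for name, sys in (("PD only", plain), ("PD + lead", lead)):
    t, y = signal.step(sys, N=2000)
    print(f"{name:10s} peak overshoot = {y.max() - 1.0:5.2f}")
```

Running the sketch shows the lead-compensated loop with markedly less overshoot, which is the qualitative benefit the paper demonstrates in simulation and flight tests.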
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, in the areas of risk assessment and of spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with the various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
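To make "neighbours only in the same depth layer" concrete, here is a small numpy sketch of what a CAR layered precision matrix looks like: the joint precision is block diagonal over depth layers, each block a proper CAR precision τ(D − ρW) on the within-layer lattice. Grid sizes, ρ and the per-depth τ values are invented:

```python
import numpy as np
from scipy.linalg import block_diag

def car_precision(nx, ny, rho, tau):
    """Proper CAR precision tau * (D - rho * W) for an nx-by-ny lattice layer."""
    m = nx * ny
    W = np.zeros((m, m))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            if i + 1 < nx:
                W[k, k + ny] = W[k + ny, k] = 1.0   # north-south neighbour
            if j + 1 < ny:
                W[k, k + 1] = W[k + 1, k] = 1.0     # east-west neighbour
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# "CAR layered": independent blocks per depth layer, so spatial smoothing
# acts within a depth while the variances can differ across depths.
nx, ny = 4, 4
taus = [1.0, 2.5, 4.0]                  # illustrative per-depth precisions
Q = block_diag(*[car_precision(nx, ny, rho=0.9, tau=t) for t in taus])
print("joint precision matrix shape:", Q.shape)   # (48, 48), block diagonal
```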
Abstract:
Modelling power system loads is a challenge, since load level and composition vary with time. An accurate load model is important because there is a substantial component of load dynamics in the frequency range relevant to system stability. The composition of loads needs to be characterised because the time constants of composite loads affect the damping contributions of the loads to power system oscillations, and their effects vary with the time of day, depending on the mix of motor loads. This chapter has two main objectives: 1) describe small-signal load modelling using on-line measurements; and 2) present a new approach to developing models that reflect the load response to large disturbances. Small-signal load characterisation based on on-line measurements allows the composition of the load to be predicted with improved accuracy compared with post-mortem or classical load models. Rather than a generic dynamic model for small-signal modelling of the load, an explicit induction motor is used, so that the performance for larger disturbances can be more reliably inferred. The relation between power and frequency/voltage can be explicitly formulated and the contribution of induction motors extracted. One of the main features of this work is that the induction motor component can be associated with nominal powers or equivalent motors.
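As a sketch of the "relation between power and frequency/voltage" step (not the chapter's actual model), the textbook exponential load model P = P0 (V/V0)^a (f/f0)^b can be fitted by log-linear least squares; the exponents, noise and per-unit variations below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Textbook exponential load model: P = P0 * (V/V0)^a * (f/f0)^b.
P0, a_true, b_true = 100.0, 1.4, 2.1            # invented "true" parameters
V = 1.0 + 0.05 * rng.standard_normal(200)       # per-unit voltage wander
f = 1.0 + 0.01 * rng.standard_normal(200)       # per-unit frequency wander
P = P0 * V**a_true * f**b_true * (1 + 0.005 * rng.standard_normal(200))

# Log-linearize: log P = log P0 + a log V + b log f; solve by least squares.
X = np.column_stack([np.ones(V.size), np.log(V), np.log(f)])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
print(f"P0 = {np.exp(coef[0]):.1f}, a = {coef[1]:.2f}, b = {coef[2]:.2f}")
```

In the chapter's approach an explicit induction motor component would sit alongside such a static characteristic, so its contribution can be extracted separately.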