58 results for Tauberian Constants

at Queensland University of Technology - ePrints Archive


Relevance:

20.00%

Publisher:

Abstract:

Differences in the NMR detectability of 39K in various excised rat tissues (liver, brain, kidney, muscle, and testes) have been observed. The lowest NMR detectability occurs for liver (61 ± 3% of potassium as measured by flame photometry) and the highest for erythrocytes (100 ± 7%). These differences in detectability correlate with differences in the measured 39K NMR relaxation constants in the same tissues. 39K detectabilities were also found to correlate inversely with the mitochondrial content of the tissues. Mitochondria prepared from liver showed greatly reduced 39K NMR detectability relative to the tissue from which they were derived (31.6 ± 9% of potassium measured by flame photometry, compared with 61 ± 3%). The detectability of potassium in mitochondria was too low to enable the measurement of relaxation constants. This study indicates that differences in tissue structure, particularly mitochondrial content, are important in determining 39K detectability and measured relaxation rates.
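As a minimal illustration of the detectability figure quoted above (the percentage of flame-photometry potassium that is NMR-visible), a purely hypothetical helper with example numbers matching the abstract:

```python
def detectability_percent(nmr_k, flame_k):
    """Percent of the flame-photometry potassium visible by 39K NMR.
    Illustrative only; inputs are potassium amounts in the same units."""
    return 100.0 * nmr_k / flame_k

# The abstract's liver (~61%) and erythrocyte (~100%) figures correspond to:
liver = detectability_percent(61.0, 100.0)   # -> 61.0
rbc = detectability_percent(100.0, 100.0)    # -> 100.0
```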

Relevance:

20.00%

Publisher:

Abstract:

The quadrupole coupling constants (qcc) for 39K and 23Na ions in glycerol have been calculated from linewidths measured as a function of temperature (which in turn results in changes in solution viscosity). The qcc of 39K in glycerol is found to be 1.7 MHz, and that of 23Na is 1.6 MHz. The relaxation behavior of 39K and 23Na ions in glycerol shows magnetic field and temperature dependence consistent with the equations for transverse relaxation more commonly used to describe the reorientation of nuclei in a molecular framework with intramolecular field gradients. It is shown, however, that τc is not simply proportional to the ratio of viscosity to temperature (η/T). The 39K qcc in glycerol and the value of 1.3 MHz estimated for this nucleus in aqueous solution are much greater than the values of 0.075 to 0.12 MHz calculated from T2 measurements of 39K in freshly excised rat tissues. This indicates that, in biological samples, processes such as exchange of potassium between intracellular compartments or diffusion of ions through locally ordered regions play a significant role in determining the effective quadrupole coupling constant and correlation time governing 39K relaxation. T1 and T2 measurements of rat muscle at two magnetic fields also indicate that a more complex correlation function may be required to describe the relaxation of 39K in tissue. Similar results and conclusions are found for 23Na.
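As a rough sketch of how a qcc can be backed out of a measured linewidth: assuming extreme narrowing and a Lorentzian line, with the standard textbook spin-3/2 prefactor (these relations and the example numbers are illustrative, not the paper's actual fit):

```python
import math

def qcc_from_linewidth(delta_nu_hz, tau_c_s, spin=1.5, eta=0.0):
    """Back out a quadrupole coupling constant chi (Hz) from a Lorentzian
    linewidth, assuming extreme narrowing:
      1/T2 = (3*pi^2/10) * (2I+3)/(I^2*(2I-1)) * (1 + eta^2/3) * chi^2 * tau_c
    with delta_nu = 1/(pi*T2). Textbook relation, not this paper's model."""
    r2 = math.pi * delta_nu_hz                        # 1/T2 in s^-1
    geom = (2 * spin + 3) / (spin ** 2 * (2 * spin - 1))
    prefactor = (3 * math.pi ** 2 / 10) * geom * (1 + eta ** 2 / 3)
    return math.sqrt(r2 / (prefactor * tau_c_s))

# Round trip: a 1.7 MHz qcc with tau_c = 1 ns implies a ~3.6 kHz linewidth,
# which the helper inverts back to the qcc.
r2_fwd = (3 * math.pi ** 2 / 10) * (4 / 3) * (1.7e6) ** 2 * 1e-9
chi_est = qcc_from_linewidth(r2_fwd / math.pi, 1e-9)
```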

Relevance:

20.00%

Publisher:

Abstract:

The surface effect on the four independent elastic constants of nanohoneycombs is investigated in this paper. The axial deformation of the horizontal cell wall is included, in contrast to Gibson's method, and the contributions of the two components of surface stress (i.e., surface residual stress and surface elasticity) are discussed. The results show that the regular hexagonal honeycomb is not isotropic but orthotropic. An increase in the cell-wall thickness t leads to an increase in the discrepancy between the Young's moduli in the two directions. Furthermore, for metallic Al the surface residual stress dominates the surface effect on the elastic constants when t < 15 nm (or the relative density < 0.17), whereas the surface elasticity dominates when t > 15 nm (or the relative density > 0.17). The present structure and theory may be useful in the design of future nanodevices.
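For orientation, the classical bending-dominated Gibson-Ashby estimate (the baseline that the paper extends with axial wall deformation and surface-stress terms) can be sketched as:

```python
import math

def gibson_e1_ratio(t, l, h=None, theta_deg=30.0):
    """Gibson-Ashby relative Young's modulus E1*/Es of a honeycomb with
    wall thickness t, inclined wall length l, vertical wall length h, and
    wall angle theta. Bending-dominated, no surface terms; for a regular
    hexagon (h = l, theta = 30 deg) it reduces to ~2.3 * (t/l)**3."""
    h = l if h is None else h
    th = math.radians(theta_deg)
    return (t / l) ** 3 * math.cos(th) / ((h / l + math.sin(th)) * math.sin(th) ** 2)

ratio = gibson_e1_ratio(1.0, 10.0)   # regular hexagon with t/l = 0.1
```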

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces fast algorithms for performing group operations on twisted Edwards curves, pushing the recent speed limits of Elliptic Curve Cryptography (ECC) forward in a wide range of applications. Notably, for suitably selected curve constants, the new addition algorithm uses fewer field operations than the fastest point addition algorithms for (twisted) Edwards curves stated in the literature. It is also shown that the new addition algorithm can be implemented with four processors, dividing the effective cost among them. This implies an effective speed increase by the full factor of 4 over the sequential case. Our results allow faster implementation of elliptic curve scalar multiplication. In addition, the new point addition algorithm can be used to provide natural protection from side channel attacks based on simple power analysis (SPA).
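A minimal sketch of the (twisted) Edwards group law in affine coordinates, using the standard unified formulas rather than this paper's optimized projective algorithm:

```python
# Unified affine addition on a twisted Edwards curve
#   a*x^2 + y^2 = 1 + d*x^2*y^2  over GF(p)  (identity point: (0, 1)).
# Standard textbook formulas, shown only to illustrate the group law the
# paper speeds up; the paper itself works in projective coordinates.

def edwards_add(P, Q, a, d, p):
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

# Toy curve a = 1, d = 2 over GF(13): doubling the order-4 point (1, 0)
# gives the order-2 point (0, -1) = (0, 12).
doubled = edwards_add((1, 0), (1, 0), 1, 2, 13)
```

With d a non-square mod p, the denominators are never zero for points on the curve, which is what makes the formulas unified (usable for both addition and doubling) and hence naturally SPA-friendly.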

Relevance:

10.00%

Publisher:

Abstract:

This paper provides new results about efficient arithmetic on Jacobi quartic form elliptic curves, y^2 = dx^4 + 2ax^2 + 1. With recent bandwidth-efficient proposals, the arithmetic on Jacobi quartic curves became considerably faster than that on Weierstrass curves. These proposals use up to 7 coordinates to represent a single point. However, fast scalar multiplication algorithms based on windowing techniques precompute and store several points, which requires more space than it would with 3 coordinates. Also note that some of these proposals require d = 1 for full speed. Unfortunately, elliptic curves having a number of points equal to 2 times a prime cannot be written in Jacobi quartic form with d = 1. Even worse, the contemporary formulae may fail to output correct coordinates for some inputs. This paper provides improved speeds using fewer coordinates without causing the above mentioned problems. For instance, our proposed point doubling algorithm takes only 2 multiplications, 5 squarings, and no multiplication with curve constants when d is arbitrary and a = ±1/2.
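A quick membership check for the quoted curve shape, with toy parameters chosen only for illustration (the paper's doubling formulas themselves are not reproduced here):

```python
def on_jacobi_quartic(x, y, a, d, p):
    """Membership test for the Jacobi quartic y^2 = d*x^4 + 2*a*x^2 + 1
    over GF(p); (0, 1) is the identity point. Illustrative helper only."""
    return (y * y - (d * pow(x, 4, p) + 2 * a * x * x + 1)) % p == 0

# With a = 1, d = 2 over GF(13): both (0, 1) and (3, 5) lie on the curve.
ok = on_jacobi_quartic(0, 1, 1, 2, 13) and on_jacobi_quartic(3, 5, 1, 2, 13)
```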

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us to understand the impact of estimation error on the performance of in-sample optimal portfolios. Key Words: minimum-variance frontier; efficiency set constants; finite sample distribution
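As a sketch of the plug-in estimator whose finite-sample bias the paper analyzes, the sample global minimum-variance weights follow w = S^{-1}1 / (1'S^{-1}1) with S the sample covariance matrix (the simulated data below are illustrative):

```python
import numpy as np

def gmv_weights(returns):
    """Plug-in global minimum-variance weights w = S^{-1}1 / (1'S^{-1}1)
    from a T x N matrix of returns, with S the sample covariance. The
    paper's point is precisely that this sample-based frontier is a biased
    estimator of the population frontier; this shows only the naive version."""
    S = np.cov(returns, rowvar=False)
    w = np.linalg.solve(S, np.ones(S.shape[0]))
    return w / w.sum()

# Simulated i.i.d. normal returns for 3 assets over 200 periods:
w = gmv_weights(np.random.default_rng(0).normal(size=(200, 3)))
```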

Relevance:

10.00%

Publisher:

Abstract:

Biomechanical and biophysical principles can be applied to study biological structures in their modern or fossil form. Bone is an important tissue in paleontological studies, as it is a commonly preserved element in most fossil vertebrates and often allows its microstructures, such as lacunae and canaliculi, to be studied in detail. In this context, the principles of fluid mechanics and scaling laws have previously been applied to enhance the understanding of bone microarchitecture and its implications for the evolution of hydraulic structures that transport fluid. It has been shown that the microstructure of bone has evolved to maintain efficient transport between the nutrient supply and cells, the living components of the tissue. Application of the principle of minimal expenditure of energy to this analysis shows that a path distance comprising five or six lamellar regions represents an effective limit for fluid and solute transport between the nutrient supply and cells; beyond this threshold, hydraulic resistance in the network increases and additional energy expenditure is necessary for further transport. This suggests an optimization of the size of bone's building blocks (such as osteon or trabecular thickness) to meet metabolic demand with minimal expenditure of energy. This biomechanical aspect of bone microstructure is corroborated by the ratio of osteon to Haversian canal diameters and the scaling constants of the several mammals considered in this study. This aspect of vertebrate bone microstructure and physiology may provide a basis for understanding the form-function relationship in both extinct and extant taxa.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, a new technique has been developed for determining the composition of a collection of loads including induction motors. The application would be to provide a representation of the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has been on post-disturbance analysis, not on continuous on-line models for loads. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertia and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. This thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to continuously decompose the load and to characterize it in terms of different sizes and amounts of motor loads.

Relevance:

10.00%

Publisher:

Abstract:

Various piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large-aperture space-based telescopes. Dimensional adjustments of adaptive polymer films depend on charge deposition and require a detailed understanding of the piezoelectric material responses, which are expected to deteriorate owing to strong vacuum-UV, γ-ray, X-ray, energetic-particle and atomic-oxygen exposure. We have investigated the degradation of PVDF and its copolymers under various stress environments detrimental to reliable operation in space. Initial radiation aging studies have shown complex material changes with lowered Curie temperatures and melting points, morphological transformations and significant crosslinking, but little influence on the piezoelectric d33 constants. Complex aging processes have also been observed in accelerated temperature environments, inducing annealing phenomena and cyclic stresses. The results suggest that poling and chain orientation are negatively affected by radiation and temperature exposure. A framework for dealing with these complex material qualification issues and overall system survivability predictions in low Earth orbit conditions has been established. It allows for improved material selection, feedback for manufacturing and processing, material optimization/stabilization strategies, and provides guidance on alternative materials.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the Integrated Mean Canonical Statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time.
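A one-dimensional toy version of the ISMC estimator mentioned above, with unnormalized Gaussians standing in for the binary MRF densities (everything below is an illustrative assumption, not the thesis's model):

```python
import math, random

def nc_ratio_ismc(q0, q1, sample_p0, n=100_000, seed=1):
    """Importance sampling Monte Carlo (ISMC) estimate of the ratio of
    normalization constants Z1/Z0 for unnormalized densities q0, q1,
    given a sampler for the *normalized* p0 = q0/Z0, using the identity
        Z1/Z0 = E_{x ~ p0}[ q1(x) / q0(x) ]."""
    rng = random.Random(seed)
    return sum(q1(x) / q0(x) for x in (sample_p0(rng) for _ in range(n))) / n

# Toy check: q0 is an unnormalized N(0, 2^2), q1 an unnormalized N(0, 1),
# so the true ratio is Z1/Z0 = sqrt(2*pi) / (2*sqrt(2*pi)) = 0.5.
q0 = lambda x: math.exp(-x * x / 8)
q1 = lambda x: math.exp(-x * x / 2)
estimate = nc_ratio_ismc(q0, q1, lambda rng: rng.gauss(0, 2))
```

Sampling from the wider density keeps the importance weights bounded; in the reverse direction the weights have infinite variance, which is one reason ISMC can perform poorly and path-sampling-style methods such as IMCS are attractive.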

Relevance:

10.00%

Publisher:

Abstract:

Modern statistical models and computational methods can now incorporate uncertainty of the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods, but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo Gibbs sampling (MCMC) via the freely available software WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply ‘plugged in’, as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
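A hypothetical sketch of the contrast the abstract draws, using an exponential dose-response model and lognormal posterior draws as stand-ins (neither is claimed to be the paper's fitted model, and the parameter values are invented):

```python
import math, random

def p_infection(r, dose):
    """Exponential dose-response model, P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

# Plug-in approach: a single 'literature' constant for r.
fixed = p_infection(0.005, 100)

# Bayesian-style approach: average over (hypothetical lognormal) posterior
# draws of r, so parameter uncertainty propagates into the risk estimate.
rng = random.Random(0)
draws = [math.exp(rng.gauss(math.log(0.005), 0.5)) for _ in range(10_000)]
uncertain = sum(p_infection(r, 100) for r in draws) / len(draws)
```

Repeating this across a range of doses shows the posterior-averaged probabilities spreading out relative to the plug-in curve, which is the variability the abstract describes.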

Relevance:

10.00%

Publisher:

Abstract:

The indoline dyes D102, D131, D149, and D205 have been characterized when adsorbed on fluorine-doped tin oxide (FTO) and TiO2 electrode surfaces. Adsorption from 50:50 acetonitrile - tert-butanol onto FTO allows approximate Langmuirian binding constants of 6.5 x 10^4, 2.01 x 10^3, 2.0 x 10^4, and 1.5 x 10^4 mol^-1 dm^3, respectively, to be determined. Voltammetric data obtained in acetonitrile/0.1 M NBu4PF6 indicate reversible one-electron oxidation at Emid = 0.94, 0.91, 0.88, and 0.88 V vs Ag/AgCl (3 M KCl), respectively, with dye aggregation (at high coverage) causing additional peak features at more positive potentials. Slow chemical degradation processes and electron transfer catalysis for iodine oxidation were observed for all four oxidized indolinium cations. When adsorbed onto TiO2 nanoparticle films (ca. 9 nm particle diameter and ca. 3 μm film thickness on FTO), reversible voltammetric responses with Emid = 1.08, 1.156, 0.92, and 0.95 V vs Ag/AgCl (3 M KCl), respectively, suggest exceptionally fast hole hopping diffusion (with Dapp > 5 x 10^-9 m^2 s^-1) for adsorbed layers of all four indoline dyes, presumably due to π-π stacking in surface aggregates. Slow dye degradation is shown to affect charge transport via electron hopping. Spectroelectrochemical data for the adsorbed indoline dyes on FTO-TiO2 revealed a red-shift of the absorption peaks after oxidation and the presence of a strong charge transfer band in the near-IR region. The implications of the indoline dye reactivity and fast hole mobility for solar cell devices are discussed.
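The Langmuirian binding constants above imply a fractional surface coverage θ = Kc/(1 + Kc); a small sketch with the quoted D102 value and an illustrative dye concentration:

```python
def langmuir_coverage(K, c):
    """Fractional coverage theta = K*c / (1 + K*c) for Langmuirian binding
    with binding constant K (mol^-1 dm^3) at concentration c (mol dm^-3)."""
    return K * c / (1.0 + K * c)

# With the D102 value K ~ 6.5e4 mol^-1 dm^3, a 1e-4 mol dm^-3 dye bath
# already gives high coverage (6.5 / 7.5, about 87%).
theta = langmuir_coverage(6.5e4, 1e-4)
```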

Relevance:

10.00%

Publisher:

Abstract:

The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, e117 (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic to the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving the computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
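For orientation, a minimal (non-delayed) Gillespie SSA for a single decay reaction illustrates the exact-but-one-event-at-a-time stepping whose cost τ-leap and τ-DSSA methods are designed to reduce; this is a textbook sketch, not the paper's algorithm:

```python
import random

def ssa_decay(n0, k, t_end, seed=0):
    """Minimal (non-delayed) Gillespie SSA for the reaction A -> 0 with
    rate constant k: draw exponential waiting times with propensity k*n
    and fire one reaction at a time. With large n or fast rates the step
    k*n grows and the time steps collapse -- the overhead tau-leaping avoids."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(k * n)   # time to the next reaction event
        if t > t_end:
            break
        n -= 1                        # fire A -> 0 once
    return n

# The mean over replicates approaches the deterministic n0 * exp(-k*t).
mean_n = sum(ssa_decay(100, 1.0, 1.0, seed=s) for s in range(500)) / 500
```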

Relevance:

10.00%

Publisher:

Abstract:

Discrete stochastic simulations, via techniques such as the Stochastic Simulation Algorithm (SSA), are a powerful tool for understanding the dynamics of chemical kinetics when there are low numbers of certain molecular species. However, an important constraint is the assumption of well-mixedness and homogeneity. In this paper, we show how to use Monte Carlo simulations to estimate an anomalous diffusion parameter that encapsulates the crowdedness of the spatial environment. We then use this parameter to replace the rate constants of bimolecular reactions with a time-dependent power law, producing an SSA valid in cases where anomalous diffusion occurs or the system is not well mixed (ASSA). Simulations then show that ASSA can successfully predict the temporal dynamics of chemical kinetics in a spatially constrained environment.
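A sketch of the power-law substitution described above, written in the Kopelman fractal-kinetics form k(t) = k0 * t^(-h); the mapping from the estimated anomalous diffusion parameter to the exponent is the paper's contribution and is not reproduced here:

```python
def fractal_rate(k0, h, t):
    """Time-dependent rate coefficient k(t) = k0 * t**(-h), the standard
    fractal-kinetics replacement for a constant bimolecular rate. h = 0
    recovers classical well-mixed kinetics; h > 0 mimics reactions slowed
    by a crowded, anomalously diffusive environment. Illustrative form
    only, not the paper's exact ASSA parameterization."""
    return k0 * t ** (-h)

classical = fractal_rate(1.0, 0.0, 5.0)   # no crowding: rate stays k0
crowded = fractal_rate(2.0, 0.5, 4.0)     # rate decays as time advances
```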