952 results for Square Root Model
Abstract:
We present results on the characterization of lasers with ultra-long cavities, up to 84 km, the longest cavity ever reported. We have analyzed the mode structure, the shape and width of the generated spectra, and intensity fluctuations as functions of cavity length and intra-cavity power. The RF spectra exhibit an ultra-dense cavity mode structure (mode spacing of 1.2 kHz for 84 km), in which the width of the mode beating is proportional to the intra-cavity power, while the optical spectra broaden with power according to a square-root law, acquiring a characteristic shape with exponential wings. A model based on the wave turbulence formalism has been developed to describe the observed effects.
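A minimal sketch of the square-root broadening law and the exponential-winged spectral shape described above; the proportionality constant and the exact wing profile are assumptions, not values from the paper:

```python
import numpy as np

def optical_spectral_width(intracavity_power, k=1.0):
    """Square-root law: spectral width grows as sqrt(intra-cavity power).
    k is a hypothetical proportionality constant."""
    return k * np.sqrt(intracavity_power)

def spectral_shape(detuning, width):
    """Spectrum with exponential wings: S(delta) ~ exp(-|delta| / width)."""
    return np.exp(-np.abs(detuning) / width)
```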
Abstract:
The internal optics of recent models of the Shin-Nippon SRW-5000 autorefractor (also marketed as the Grand Seiko WV-500) have been modified by the manufacturer so that the infrared measurement ring has been replaced by pairs of horizontal and vertical infrared bars on either side of fixation. The binocular open field of view, which allows the accommodative state to be objectively monitored while a natural environment is viewed, has made the SRW-5000 a valuable tool in further understanding the nature of the oculomotor response. It is shown that the root-mean-square of model eye measures was least (0.017 ± 0.002D) when the separation of the horizontal measurement bars was averaged twice. The separation of the horizontal bars changes by 3.59 pixels/dioptre (r² = 0.99), allowing continuous on-line analysis of the refractive state at up to 60 Hz temporal resolution to an accuracy of <0.001D with pupils >3 mm. The pupil edge is not obscured in the diagonal axis by the measurement bars, unlike the ring of the original optics, so in the newer model pupil size can be measured simultaneously at the same rate with a resolution of <0.001 mm. The measurements of accommodation and pupil size are relatively unaffected by eccentricity of viewing up to ±10° from the visual axis and by instrument focusing inaccuracies over a range of 10 mm towards the eye and 5 mm away from the eye. The resolution and temporal properties of the analysis are therefore ideal for the simultaneous measurement of dynamic accommodation and pupil responses. © 2004 The College of Optometrists.
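A hedged sketch of the reported pixel-to-dioptre conversion; the zero-dioptre reference separation is a hypothetical calibration value, not taken from the paper:

```python
PIXELS_PER_DIOPTRE = 3.59  # reported calibration slope (r² = 0.99)

def refractive_state(bar_separation_px, zero_dioptre_separation_px):
    """Convert the horizontal-bar separation (pixels) into refractive state
    (dioptres), relative to a hypothetical zero-dioptre calibration point."""
    return (bar_separation_px - zero_dioptre_separation_px) / PIXELS_PER_DIOPTRE
```

Sampled at up to 60 Hz, such a conversion supports the continuous on-line analysis of accommodation described above.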
Abstract:
Purpose: The aim of this work was to optimize biodegradable polyester poly(glycerol adipate-co-ω-pentadecalactone), PGA-co-PDL, microparticles as sustained release (SR) carriers for pulmonary drug delivery. Methods: Microparticles were produced by spray drying directly from double emulsion with and without dispersibility enhancers (L-arginine and L-leucine) (0.5-1.5% w/w), using sodium fluorescein (SF) as a model hydrophilic drug. Results: Spray-dried microparticles without dispersibility enhancers exhibited aggregated powders, leading to a low fine particle fraction (%FPF) (28.79±3.24), fine particle dose (FPD) (14.42±1.57 μg), and a mass median aerodynamic diameter (MMAD) of 2.86±0.24 μm. L-leucine was significantly superior to L-arginine in enhancing aerosolization performance (L-arginine: %FPF 27.61±4.49-26.57±1.85; FPD 12.40±0.99-19.54±0.16 μg and MMAD 2.18±0.35-2.98±0.25 μm; L-leucine: %FPF 36.90±3.6-43.38±5.6; FPD 18.66±2.90-21.58±2.46 μg and MMAD 2.55±0.03-3.68±0.12 μm). Incorporating L-leucine (1.5% w/w) reduced the burst release (24.04±3.87%) of SF compared to unmodified formulations (41.87±2.46%), with both undergoing square-root-of-time (Higuchi) dependent release. Comparing the toxicity profiles of PGA-co-PDL with L-leucine (1.5% w/w) (5 mg/ml) and poly(lactide-co-glycolide) (5 mg/ml) spray-dried microparticles in the human bronchial epithelial 16HBE14o- cell line resulted in cell viabilities of 85.57±5.44 and 60.66±6.75%, respectively, after 72 h treatment. Conclusion: The above data suggest that PGA-co-PDL may be a useful polymer for preparing SR microparticle carriers, together with dispersibility enhancers, for pulmonary delivery. © Springer Science+Business Media, LLC 2011.
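A minimal sketch of the square-root-of-time (Higuchi) release pattern both formulations followed; the rate constant below is hypothetical, since the abstract reports the pattern rather than a fitted constant:

```python
import numpy as np

def higuchi_release(t_hours, k_h=0.08):
    """Higuchi model: cumulative fraction released = k_H * sqrt(t), capped at 1.
    k_h is a hypothetical rate constant (fraction per sqrt(hour))."""
    return np.minimum(k_h * np.sqrt(np.asarray(t_hours, dtype=float)), 1.0)

print(higuchi_release([1, 4, 24, 72]))  # monotone in sqrt(t)
```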
Abstract:
Light transmission was measured through intact, submerged periphyton communities on artificial seagrass leaves. The periphyton communities were representative of the communities on Thalassia testudinum in subtropical seagrass meadows. The periphyton communities sampled were adhered carbonate sediment, coralline algae, and mixed algal assemblages. Crustose or film-forming periphyton assemblages were best prepared for light transmission measurements using artificial leaves fouled on both sides, while measurements through three-dimensional filamentous algae required the periphyton to be removed from one side. For one-sided samples, light transmission could be measured as the difference between fouled and reference artificial leaf samples. For two-sided samples, the percent periphyton light transmission to the leaf surface was calculated as the square root of the fraction of incident light. Linear, exponential, and hyperbolic equations were evaluated as descriptors of the periphyton dry weight versus light transmission relationship. Hyperbolic and exponential decay models were superior to linear models and exhibited the best fits for the observed relationships. Differences between the coefficients of determination (r²) of hyperbolic and exponential decay models were not statistically significant. Constraining these models to 100% light transmission at zero periphyton load did not result in any statistically significant loss in the explanatory capability of the models. In almost all cases, increasing model complexity by using three-parameter models rather than two-parameter models did not significantly increase the amount of variation explained. Constrained two-parameter hyperbolic or exponential decay models were judged best for describing the periphyton dry weight versus light transmission relationship. On T. testudinum in Florida Bay and the Florida Keys, significant differences were not observed in the light transmission characteristics of the varying periphyton communities at different study sites. Using pooled data from the study sites, the hyperbolic decay coefficient for periphyton light transmission was estimated to be 4.36 mg dry wt. cm⁻². For exponential models, the exponential decay coefficient was estimated to be 0.16 cm² mg dry wt.⁻¹.
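A sketch of the constrained decay models with the pooled coefficients reported above; the hyperbolic form T = k/(k + B) is an assumed parameterization consistent with the constraint T(0) = 1, not spelled out in the abstract:

```python
import math

K_HYP = 4.36   # mg dry wt. cm⁻², pooled hyperbolic decay coefficient
K_EXP = 0.16   # cm² (mg dry wt.)⁻¹, pooled exponential decay coefficient

def transmission_exponential(load_mg_cm2):
    """Constrained exponential decay: 100% transmission at zero periphyton load."""
    return math.exp(-K_EXP * load_mg_cm2)

def transmission_hyperbolic(load_mg_cm2):
    """Constrained hyperbolic decay with T(0) = 1 (assumed form)."""
    return K_HYP / (K_HYP + load_mg_cm2)

print(transmission_exponential(4.36), transmission_hyperbolic(4.36))  # ~0.50 each
```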
Abstract:
In the finance literature many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk averseness and rational behavior on the part of the investor, models are developed that are supposed to help in forming efficient portfolios, those that either maximize the expected rate of return for a given level of risk or minimize the risk for a given rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values of the parameters of the probability distribution. Likewise, financial volatility homogeneity is commonly assumed, where volatility is taken as investment risk, usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility. Furthermore, it is often assumed that the data generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate the assumption of homogeneity of market agents and provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as shown by the differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data generating processes, including the volatility of the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis on trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact, we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
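A small illustration of volatility defined as the square root of the variance, computed at two time scales; the data here are simulated i.i.d. returns, i.e. the homogeneous benchmark the dissertation argues against:

```python
import numpy as np

def volatility(returns):
    """Financial volatility: the square root of the variance of the returns."""
    return np.sqrt(np.var(returns, ddof=1))

rng = np.random.default_rng(0)
daily = rng.normal(0.0, 0.01, 1000)          # simulated i.i.d. daily returns
weekly = daily.reshape(-1, 5).sum(axis=1)    # 5-day aggregates
# Under i.i.d. homogeneity, weekly volatility ~ sqrt(5) times daily volatility;
# departures from this scaling across time scales signal heterogeneity.
print(volatility(daily), volatility(weekly))
```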
Abstract:
Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria used to evaluate performance. The results indicated the superiority of (a) uniform priors, (b) model likelihood functions that are odd-symmetric about threshold and that have parameter values larger than their counterparts in the psychometric function, (c) stimulus placement at the prior mean, and (d) estimates defined as the posterior mean. Unbiasedness is attained within only 10 trials, and 20 trials ensure constant standard errors. The standard error of the estimates equals 0.617 times the inverse of the square root of the number of trials. Other variants yielded bias and larger standard errors.
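The reported error law, as a one-line helper (units follow the stimulus axis of the psychometric function):

```python
import math

def threshold_standard_error(n_trials):
    """Standard error of the 5% point estimate: 0.617 / sqrt(number of trials)."""
    return 0.617 / math.sqrt(n_trials)

print(threshold_standard_error(10), threshold_standard_error(20))  # ~0.195, ~0.138
```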
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
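A minimal sketch of the latent structure (PARAFAC) factorization referred to above, with hypothetical dimensions; the probability mass function is a rank-k nonnegative tensor factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
k, p, d = 3, 4, 2                             # classes, variables, categories (hypothetical)
nu = rng.dirichlet(np.ones(k))                # latent class weights
lam = rng.dirichlet(np.ones(d), size=(k, p))  # per-class, per-variable marginals

def joint_pmf(x):
    """P(x) = sum_h nu_h * prod_j lam[h, j, x_j]: the latent class form."""
    return sum(nu[h] * np.prod([lam[h, j, x[j]] for j in range(p)]) for h in range(k))

print(joint_pmf((0, 1, 0, 1)))
```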
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data are frequently encountered even for modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis–Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis–Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback–Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
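A rough sketch of the stated mixing result; only the 1/sqrt(n)-up-to-a-log-factor rate comes from the abstract, and the constant and exact bound shape below are hypothetical:

```python
import math

def spectral_gap_sketch(n, c=1.0):
    """Hypothetical bound shape for the rare-event regime: the spectral gap
    shrinks roughly like c / (sqrt(n) * log(n)), so mixing slows as n grows."""
    return c / (math.sqrt(n) * math.log(n))

print(spectral_gap_sketch(10_000), spectral_gap_sketch(1_000_000))
```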
Abstract:
Densification is a key to greater throughput in cellular networks. The full potential of coordinated multipoint (CoMP) can be realized by massive multiple-input multiple-output (MIMO) systems, where each base station (BS) has very many antennas. However, the improved throughput comes at the price of more infrastructure; hardware cost and circuit power consumption scale linearly/affinely with the number of antennas. In this paper, we show that one can make the circuit power increase with only the square root of the number of antennas by circuit-aware system design. To this end, we derive achievable user rates for a system model with hardware imperfections and show how the level of imperfections can be gradually increased while maintaining high throughput. The connection between this scaling law and the circuit power consumption is established for different circuits at the BS.
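A hedged sketch of the circuit power scaling law; the static and per-antenna coefficients are hypothetical, while the square-root scaling itself is what the paper establishes:

```python
import math

def circuit_power_watts(m_antennas, p_static=1.0, c=0.1):
    """Circuit-aware design: total circuit power grows with sqrt(M) rather
    than linearly/affinely in the number of BS antennas M."""
    return p_static + c * math.sqrt(m_antennas)

print(circuit_power_watts(100), circuit_power_watts(400))  # 2.0 W vs. 3.0 W
```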
Abstract:
The basic reproduction number is a key parameter in the mathematical modelling of transmissible diseases. From the stability analysis of the disease-free equilibrium, by applying the Routh-Hurwitz criteria, a threshold is obtained, which is called the basic reproduction number. However, the application of spectral radius theory to the next generation matrix provides a different expression for the basic reproduction number: the square root of the previously found formula. If the spectral radius of the next generation matrix is defined as the geometric mean of partial reproduction numbers, while the product of these partial numbers is the basic reproduction number, then both methods provide the same expression. To demonstrate this statement, dengue transmission modelling, with and without transovarian transmission, is considered as a case study. Tuberculosis transmission and sexually transmitted infection models are taken as further examples.
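A worked numerical illustration of why the two definitions agree at the threshold; the partial reproduction numbers below are hypothetical:

```python
import math

R_hv, R_vh = 1.8, 0.9                    # host-to-vector and vector-to-host (hypothetical)

R0_routh_hurwitz = R_hv * R_vh           # product: threshold from stability analysis
R0_spectral = math.sqrt(R_hv * R_vh)     # geometric mean: next generation matrix

# Both expressions cross 1 at exactly the same parameter values, so they
# define the same epidemic threshold.
print(R0_routh_hurwitz, R0_spectral)     # 1.62, ~1.27
```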
Abstract:
American tegumentary leishmaniasis (ATL) is a disease transmitted to humans by female sandflies of the genus Lutzomyia. Several factors are involved in the disease transmission cycle. In this work only rainfall and deforestation were considered to assess the variability in the incidence of ATL. To this end, monthly recorded data on the incidence of ATL in Orán, Salta, Argentina, were used for the period 1985-2007. The square root of the relative incidence of ATL and the corresponding variance were formulated as time series, and these data were smoothed by moving averages of 12 and 24 months, respectively. The same procedure was applied to the rainfall data. Typical months, namely April, August, and December, were identified and allowed us to describe the dynamical behavior of ATL outbreaks. These results were tested at the 95% confidence level. We concluded that the variability of rainfall would not be enough to justify the epidemic outbreaks of ATL in the period 1997-2000, but it consistently explains the situation observed in the years 2002 and 2004. Deforestation activities that occurred in this region could explain the epidemic peaks observed in both years, and also during the entire observation period except 2005-2007.
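A sketch of the preprocessing described above, assuming hypothetical monthly case counts and population (the study's own series covers Orán, 1985-2007):

```python
import numpy as np

def smoothed_sqrt_incidence(monthly_cases, population, window=12):
    """Square root of relative incidence, smoothed by a moving average
    (12- and 24-month windows were used in the study)."""
    x = np.sqrt(np.asarray(monthly_cases, dtype=float) / population)
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")
```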
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The objective of the present study was to estimate (co)variance components for length of productive life (LPL) and some alternative reproductive traits of 6-year-old Nellore cattle. The data set contained 57,410 records of age at first calving from Nellore females and was edited to remove animals with uncertain paternity and cows with only one calving record. Only animals with age at first calving ranging from 23 to 48 months and calving intervals between 11 and 24 months were kept for analysis. LPL and life production (LP) were used to describe productive life. LPL was defined as the number of months a cow was kept in the herd until she was 6 years old, given that she was alive at first calving, and LP was defined as the total number of calves in that time. Four traits were used to describe reproduction: two breeding efficiencies on the original scale, estimated using the Wilcox and Tomar functions (BEW and BET, respectively), and two transformed breeding efficiencies (ASBEW and ASBET, respectively), obtained using the function arcsine(√(BEi/100)). Estimates of heritability for measures of LPL and LP were low, ranging from 0.04 to 0.05. Estimates of heritability for breeding efficiencies on the original and transformed scales ranged from 0.18 to 0.32. Estimates of genetic correlations ranged from -0.57 to 0.79 between LPL and the other traits, and from 0.28 to 0.63 between LP and the other traits.
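The transform applied to the breeding efficiencies, as a one-line helper:

```python
import math

def arcsine_sqrt(be_percent):
    """Breeding efficiency transform: arcsine(sqrt(BE_i / 100)), BE_i in percent."""
    return math.asin(math.sqrt(be_percent / 100.0))

print(arcsine_sqrt(75.0))  # ~1.047 rad
```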
Abstract:
Although theoretical models have already been proposed, experimental data are still lacking to quantify the influence of grain size on the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force, and the possible effect of lurking variables such as the breadth of the grain size distribution and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). The coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)·mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed. © 2011 Elsevier B.V. All rights reserved.
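A hedged sketch of the linear reciprocal-grain-size relation at 1.0 T; the intercept is a hypothetical fit parameter, not reported in the abstract:

```python
def coercive_field_a_per_m(grain_size_mm, slope=0.9, intercept=0.0):
    """H_c ≈ intercept + slope / d, with slope ≈ 0.9 (A/m)·mm at 1.0 T."""
    return intercept + slope / grain_size_mm

print(coercive_field_a_per_m(0.020), coercive_field_a_per_m(0.150))  # 45.0 and 6.0 A/m
```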
Abstract:
Numerical experiments using a finite difference method were carried out to determine the motion of axisymmetric Taylor vortices in narrow-gap Taylor vortex flow. When a pressure gradient is imposed on the flow, the vortices are observed to move with an axial speed of 1.16 ± 0.005 times the mean axial flow velocity. The method of Brenner was used to calculate the long-time axial spread of material in the flow. For flows with no pressure gradient, the axial dispersion scales with the square root of the molecular diffusivity, in agreement with the results of Rosenbluth et al. for high Péclet number dispersion in spatially periodic flows with a roll structure. When a pressure gradient is imposed, the dispersion increases by an amount approximately equal to 6.5 × 10⁻⁴ W̄²d²/Dₘ, where W̄ is the average axial velocity in the annulus and Dₘ is the molecular diffusivity, analogous to Taylor dispersion for laminar flow in an empty tube.
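The quoted dispersion increase as a helper function (consistent units assumed, e.g. SI):

```python
def dispersion_increase(w_bar, d, d_m):
    """Extra axial dispersion under an imposed pressure gradient:
    ~6.5e-4 * W̄**2 * d**2 / D_m (W̄: mean axial velocity, d: gap width,
    D_m: molecular diffusivity)."""
    return 6.5e-4 * w_bar**2 * d**2 / d_m
```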