503 results for Variances
Abstract:
The normal range for scrotal circumference in Australian beef bulls was established using more than 300,000 measurements of breed, management group, age, liveweight, and scrotal circumference. The data were derived from Australian bull breeders and two large research projects in northern Australia. Most bulls were within 250 to 750 kg liveweight and 300 to 750 days of age. The differences between breeds and the variances within breeds were higher when scrotal circumference was predicted from age rather than from liveweight, because of variation in growth rates. The average standard deviations for scrotal circumference predicted from liveweight and from age were 25 and 30 mm, respectively. Scrotal circumference by liveweight relationships had a similar pattern across all breeds except Wagyu, with a 50 to 70 mm range in average scrotal circumference at liveweights between 250 and 750 kg. Temperate breed bulls tended to have higher scrotal circumference at the same liveweight than tropically adapted breeds. Five groupings of common beef breeds in Australia were identified, within which predictions of scrotal circumference from liveweight were similar. It was concluded that both liveweight and breed are required to identify whether scrotal circumference is within the normal range for Australian beef bulls, which experience a wide range of nutritional conditions.
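The breed-group prediction curves themselves are in the paper and are not reproduced in this abstract, but given a predicted scrotal circumference and the reported average standard deviation of about 25 mm for liveweight-based predictions, a normal range follows directly. A minimal sketch (the predicted value passed in is hypothetical):

```python
def normal_range_mm(predicted_sc_mm, sd_mm=25.0, k=2.0):
    """Normal range as predicted mean +/- k standard deviations.

    predicted_sc_mm is a breed-group prediction of scrotal circumference from
    liveweight (the actual prediction curves are in the paper, not shown here);
    sd_mm defaults to the ~25 mm average SD reported for liveweight-based
    predictions."""
    half_width = k * sd_mm
    return predicted_sc_mm - half_width, predicted_sc_mm + half_width

# Example: a hypothetical predicted SC of 360 mm gives roughly a 310-410 mm range.
print(normal_range_mm(360.0))
```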
Abstract:
Elucidating the mechanisms responsible for the patterns of species abundance, diversity, and distribution within and across ecological systems is a fundamental research focus in ecology. Species abundance patterns are shaped in complex ways by the interplay of inter- and intra-specific interactions, environmental forcing, demographic stochasticity, and dispersal. Comprehensive models and suitable inferential and computational tools for teasing out these different factors are quite limited, even though such tools are critically needed to guide the implementation of management and conservation strategies, the efficacy of which rests on a realistic evaluation of the underlying mechanisms. This is even more so in the prevailing context of concern over progressing climate change and its potential impacts on ecosystems. This thesis used the flexible hierarchical Bayesian modelling framework, in combination with the computer-intensive methods known as Markov chain Monte Carlo (MCMC), to develop methodologies for identifying and evaluating the factors that control the structure and dynamics of ecological communities. These methodologies were used to analyze data from a range of taxa: macro-moths (Lepidoptera), fish, crustaceans, birds, and rodents. Environmental stochasticity emerged as the most important driver of community dynamics, followed by density-dependent regulation; the influence of inter-specific interactions on community-level variances was broadly minor. This thesis contributes to the understanding of the mechanisms underlying the structure and dynamics of ecological communities by showing directly that environmental fluctuations, rather than inter-specific competition, dominate the dynamics of several systems. This finding emphasizes the need to better understand how species are affected by the environment and to acknowledge differences among species in their responses to environmental heterogeneity, if we are to effectively model and predict their dynamics (e.g. for management and conservation purposes). The thesis also proposes a model-based approach to integrating the niche and neutral perspectives on community structure and dynamics, making it possible to evaluate the relative importance of each category of factors in light of field data.
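As a rough illustration of the kind of hierarchical model such analyses build on, a multivariate stochastic Gompertz formulation is sketched below; this is a common choice for separating density dependence, inter-specific interactions, and environmental stochasticity, not necessarily the exact specification fitted in the thesis:

```latex
\[
  x_{i,t+1} = x_{i,t} + r_i + \sum_{j=1}^{S} b_{ij}\, x_{j,t} + \epsilon_{i,t},
  \qquad \boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, \Sigma_E),
  \qquad y_{i,t} \mid x_{i,t} \sim \mathrm{Poisson}\!\left(e^{x_{i,t}}\right),
\]
```

where $x_{i,t}$ is the log abundance of species $i$, $b_{ii}$ captures density-dependent regulation, the off-diagonal $b_{ij}$ capture inter-specific interactions, $\Sigma_E$ holds the environmental (co)variances, and the Poisson observation layer absorbs demographic and sampling stochasticity; posteriors for all of these quantities are obtained by MCMC.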
Abstract:
We have evaluated techniques for estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficients of variation. These two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit was generally better when data were placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects returned a 10% coefficient of variation on the density estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and the size of the group for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
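For context, the sketch below shows a minimal line-transect density calculation with an untruncated half-normal detection function; the distances, group sizes, and transect length are hypothetical, and the study's Fourier series estimator and cut-off widths are not reproduced here:

```python
import math

def half_normal_density_km2(perp_distances_m, group_sizes, transect_length_km):
    """Line-transect density with an untruncated half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)): the MLE of sigma^2 is mean(x^2) and the
    effective (one-sided) strip width is sigma * sqrt(pi/2)."""
    n = len(perp_distances_m)
    sigma2 = sum(x * x for x in perp_distances_m) / n        # MLE of sigma^2
    esw_m = math.sqrt(sigma2 * math.pi / 2.0)                # effective strip width
    L_m = transect_length_km * 1000.0
    groups_per_m2 = n / (2.0 * L_m * esw_m)
    mean_group_size = sum(group_sizes) / n
    return groups_per_m2 * mean_group_size * 1e6             # animals per km^2

# Hypothetical data: 40 detections along 25 km of transects.
dists = [5, 12, 3, 20, 8, 15, 2, 30, 10, 7] * 4
sizes = [4, 6, 2, 8, 5, 3, 7, 10, 4, 6] * 4
print(round(half_normal_density_km2(dists, sizes, 25.0), 1))
```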
Abstract:
We derive a very general expression for the survival probability and the first passage time distribution of a particle executing Brownian motion in full phase space, with an absorbing boundary condition at a point in position space, which is valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and an approximate first passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances. Knowledge of these variances enables one to compute a lower bound on the survival probability and consequently the first passage time distribution. As examples, we compute these for a Gaussian Markovian process and, in the case of a non-Markovian process, for both an exponentially decaying friction kernel and a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with an exponent equal to the transition state rate constant.
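A schematic of how Jensen's inequality yields such a variance-based lower bound, assuming the survival probability is written as an average of an exponential functional over trajectories (a sketch, not the paper's general expression):

```latex
\[
  S(t) \;=\; \Big\langle \exp\!\Big(-\int_0^t k[x(s)]\,\mathrm{d}s\Big) \Big\rangle
  \;\ge\; \exp\!\Big(-\int_0^t \big\langle k[x(s)] \big\rangle\,\mathrm{d}s\Big),
  \qquad
  F(t) \;=\; -\frac{\mathrm{d}S(t)}{\mathrm{d}t},
\]
```

which follows from the convexity of the exponential; for Gaussian dynamics the average $\langle k[x(s)]\rangle$ depends only on the mean trajectory and the variances $\sigma_{xx}(s)$, $\sigma_{vv}(s)$, and $\sigma_{xv}(s)$, so the bound is computable once these are known.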
Abstract:
Sepsis is the leading cause of death in intensive care units and results from a deleterious systemic host response to infection. Although initially perceived as potentially deleterious, catalytic antibodies have been proposed to participate in the removal of metabolic wastes and in protection against infection. Here we show that the presence in plasma of IgG endowed with serine protease-like hydrolytic activity strongly correlates with survival from sepsis. Variances of the catalytic rates of IgG were greater in patients with severe sepsis than in healthy donors (P < 0.001), indicating that sepsis is associated with alterations in plasma levels of hydrolytic IgG. The catalytic rates of IgG from patients who survived were significantly greater than those of IgG from deceased patients (P < 0.05). The cumulative rate of survival was higher among patients exhibiting high rates of IgG-mediated hydrolysis than among patients with low hydrolytic rates (P < 0.05). An inverse correlation was also observed between markers of the severity of disseminated intravascular coagulation and the hydrolysis rates of patients' IgG. Furthermore, IgG from three surviving patients hydrolyzed factor VIII, and IgG from one of these patients also hydrolyzed factor IX, suggesting that, in some patients, catalytic IgG may participate in the control of disseminated microvascular thrombosis. Our observations provide the first evidence that hydrolytic antibodies might play a role in recovery from a disease.
Abstract:
A better understanding of stock price changes is important in guiding many economic activities. Since prices often do not change without good reason, the search for related explanatory variables has attracted many researchers. This book seeks answers from prices per se by relating price changes to their conditional moments. This is based on the belief that prices are the products of a complex psychological and economic process and that their conditional moments derive ultimately from these psychological and economic shocks. Utilizing information about conditional moments therefore makes this an attractive alternative to using other selected financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distributions; it finds that the significance of the estimated mean-variance ratio can be affected by the assumed distributions and by time variation in skewness. The second paper decomposes conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility (6 to 9 percent in the UK and 2 to 3 percent in Germany). The third paper looks at the heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that stock returns exhibit several forms of heteroskedasticity, including deterministic changes in variances due to seasonal factors, random adjustments in variances due to market and macro factors, and ARCH processes driven by past information. The fourth paper examines the role of higher moments, especially skewness and kurtosis, in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures and that they are costly to diversify away, either because diversification may eliminate their desirable components or because diversification strategies based on them are unsustainable.
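As an illustration of the specification the third paper refers to, a generic ARCH variance equation augmented with deterministic seasonal terms and lagged conditioning variables might look as follows (a sketch, not the book's exact model):

```latex
\[
  r_t = \mu_t + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim \text{i.i.d.}(0, 1),
\]
\[
  \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2
  + \boldsymbol{\gamma}' \mathbf{w}_{t-1} + \boldsymbol{\delta}' \mathbf{d}_t,
\]
```

where $\mathbf{d}_t$ collects seasonal dummies (deterministic variance changes) and $\mathbf{w}_{t-1}$ collects lagged market and macro conditioning variables (random variance adjustments).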
Abstract:
Ramakrishnan A, Chokhandre S, Murthy A. Voluntary control of multisaccade gaze shifts during movement preparation and execution. J Neurophysiol 103: 2400-2416, 2010. First published February 17, 2010; doi: 10.1152/jn.00843.2009. Although the nature of gaze control regulating single saccades is relatively well documented, how such control is implemented to regulate multisaccade gaze shifts is not known. We used highly eccentric targets to elicit multisaccade gaze shifts and tested the ability of subjects to control the saccade sequence by presenting a second target on random trials. Their responses allowed us to test the nature of control at many levels: before, during, and between saccades. Although the saccade sequence could be inhibited before it began, we also observed clear signs of truncation of the first saccade, which confirmed that it could be inhibited in midflight as well. Using a race model that explains the control of single saccades, we estimated that it took about 100 ms to inhibit a planned saccade but about 150 ms to inhibit a saccade during its execution. Although the times taken to inhibit were different, the high subject-wise correlation suggests a unitary inhibitory control acting at different levels in the oculomotor system. We also frequently observed responses that consisted of hypometric initial saccades, followed by secondary saccades to the initial target. Given the estimates of the inhibitory process provided by the model, which also took into account the variances of the processes, the secondary saccades (average latency approximately 215 ms) should have been inhibited. The failure to inhibit these secondary saccades suggests that the intersaccadic interval in a multisaccade response is a ballistic stage. Collectively, these data indicate that the oculomotor system can control a response until a very late stage in its execution. However, if the response consists of multiple movements, then the preparation of the second movement becomes refractory to new visual input, either because it is part of a preprogrammed sequence or as a consequence of being a corrective response to a motor error.
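The race-model estimates quoted above can be illustrated with a small Monte Carlo sketch in which a GO process, with latency drawn from the no-step reaction time distribution, races an inhibitory process that finishes at the target step delay plus the inhibition time; the RT distribution and delay used here are hypothetical:

```python
import random

def p_not_cancelled(go_rts_ms, step_delay_ms, inhibit_time_ms, n=100_000):
    """Independent race model sketch: a saccade escapes inhibition whenever a
    GO latency sampled from the no-step RT distribution finishes before the
    inhibitory process, which completes at step_delay_ms + inhibit_time_ms
    (roughly 100 ms for planned saccades and 150 ms for saccades already in
    flight, per the estimates quoted in the abstract)."""
    stop_finish = step_delay_ms + inhibit_time_ms
    escapes = sum(random.choice(go_rts_ms) < stop_finish for _ in range(n))
    return escapes / n

# Hypothetical no-step RT sample (ms) and a 50 ms target step delay.
go_rts = [random.gauss(220, 40) for _ in range(5000)]
print(p_not_cancelled(go_rts, step_delay_ms=50, inhibit_time_ms=100))
```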
Abstract:
The thesis presents a state-space model for a basketball league and a Kalman filter algorithm for estimating the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength relative to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least-squares-optimal estimates of the team strengths and predictions of the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time. The team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home-court advantage observed in the scores. The thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The thesis also gives various example analyses related to the American professional basketball league, the National Basketball Association (NBA), and the regular seasons played from 2005 through 2010. Additionally, the 2009-2010 season is discussed in full detail, including the playoffs.
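A minimal sketch of one filtering step for such a random-walk rating model is given below; the process noise q, observation noise r, and home advantage are hypothetical placeholders rather than the parameter values estimated in the thesis:

```python
import numpy as np

def kalman_rating_update(x, P, home, away, score_diff,
                         home_adv=3.0, q=0.02, r=120.0):
    """One Kalman filter step for a random-walk team-rating model.

    x: current rating estimates (n_teams,); P: covariance (n_teams, n_teams).
    Observation model: score_diff = x[home] - x[away] + home_adv + noise."""
    n = len(x)
    P = P + q * np.eye(n)                  # time update: random-walk increments
    H = np.zeros(n)                        # observation row: home minus away
    H[home], H[away] = 1.0, -1.0
    y = score_diff - (H @ x + home_adv)    # innovation
    S = H @ P @ H + r                      # innovation variance
    K = P @ H / S                          # Kalman gain
    return x + K * y, P - np.outer(K, H @ P)

# Example: four teams; home team 0 beats away team 2 by 7 points.
x0, P0 = np.zeros(4), np.eye(4) * 10.0
x1, P1 = kalman_rating_update(x0, P0, home=0, away=2, score_diff=7.0)
print(np.round(x1, 2))
```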
Abstract:
Consider $L$ independent and identically distributed exponential random variables (r.v.s) $X_1, X_2, \ldots, X_L$ and positive scalars $b_1, b_2, \ldots, b_L$. In this letter, we present the probability density function (pdf), the cumulative distribution function, and the Laplace transform of the pdf of the composite r.v. $Z = \left(\sum_{j=1}^{L} X_j\right)^2 / \left(\sum_{j=1}^{L} b_j X_j\right)$. We show that the r.v. $Z$ appears in various communication systems, such as i) maximal ratio combining of signals received over multiple channels with mismatched noise variances, ii) M-ary phase-shift keying with spatial diversity and imperfect channel estimation, and iii) coded multi-carrier code-division multiple access reception affected by an unknown narrow-band interference, and the statistics of the r.v. $Z$ derived here enable us to carry out the performance analysis of such systems in closed form.
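The closed-form statistics derived in the letter are not reproduced in this abstract, but the composite variable itself is straightforward to simulate; the sketch below draws samples of $Z$ for a hypothetical set of scalars $b_j$ as a quick numerical check:

```python
import random

def sample_Z(b, rate=1.0):
    """One draw of Z = (sum_j X_j)^2 / (sum_j b_j X_j) with i.i.d. exponential
    X_j; b holds the positive scalars b_1, ..., b_L."""
    x = [random.expovariate(rate) for _ in b]
    s = sum(x)
    return s * s / sum(bj * xj for bj, xj in zip(b, x))

# Example: L = 4 branches with mismatched noise scalings b_j (hypothetical).
b = [1.0, 1.5, 0.7, 2.0]
samples = [sample_Z(b) for _ in range(100_000)]
print(sum(samples) / len(samples))   # Monte Carlo estimate of E[Z]
```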
Abstract:
In many countries, the prevalence of smoking and smokers' average cigarette consumption have decreased, with occasional smoking and daily light smoking (1-4 cigarettes per day, CPD) becoming more common. Despite these changes in smoking patterns, the prevalence of chronic obstructive pulmonary disease (COPD), a disorder characterized by a progressive decline in lung function, continues to rise globally. Smoking is the most important factor causing COPD; however, not all smokers develop the disease. Genetic factors partly explain the inter-individual differences in lung function and the susceptibility of some smokers to COPD. No earlier research on the genetic and environmental determinants of lung function or on the phenomenon of light smoking exists in the Finnish population. Further, the association between low-rate smoking patterns and COPD remains partly unknown. This thesis aimed to study the prevalence and consistency of light smoking longitudinally in the Finnish population, to assess the characteristics of light smokers, and to examine the risks of chronic bronchitis and COPD associated with changing smoking patterns over time. A further aim was to estimate longitudinally the proportions of genetic and environmental factors that explain the inter-individual variances in lung function. Data from the Older Finnish Twin Cohort, which includes same-sex twin pairs born in Finland before 1958, were used. Smoking patterns and chronic bronchitis symptoms were consistently assessed in surveys conducted in 1975, 1981, and 1990. National registry data on reimbursement eligibilities and medication purchases were used to define COPD. Lung function data were obtained from a subsample of the cohort, 217 female twin pairs, who attended spirometry in 2000 and 2003 as part of the Finnish Twin Study on Ageing. The genetic and environmental influences on lung function were estimated using genetic modeling. This thesis found that light smokers are more often female and well educated, and exhibit a healthier lifestyle than heavy smokers. At the individual level, light smoking is rarely a constant pattern. Light smoking, reducing from heavier smoking to light smoking, and relapsing to light smoking after quitting are among the patterns associated with an increased risk of chronic bronchitis and COPD. Constant light smoking is associated with an increased use of inhaled anticholinergics, a medication for COPD. In addition to smoking, other environmental factors influence lung function in older age. During a three-year follow-up, new environmental effects influencing spirometry values were observed, whereas the genes affecting lung function remained mostly the same. In conclusion, no safe level of daily smoking exists with regard to pulmonary diseases. Even daily light smoking in middle age is associated with increased respiratory morbidity later in life. Smoking reduction does not decrease the risk of COPD and should not be recommended as an alternative to quitting smoking. In elderly people, attention should also be drawn to other factors that can help prevent poor lung function.
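The genetic modeling referred to above is typically based on the classical twin (ACE) decomposition of phenotypic variance, sketched below; the thesis's longitudinal models are more elaborate than this:

```latex
\[
  \operatorname{Var}(P) = a^2 + c^2 + e^2, \qquad
  \operatorname{Cov}_{\mathrm{MZ}} = a^2 + c^2, \qquad
  \operatorname{Cov}_{\mathrm{DZ}} = \tfrac{1}{2}a^2 + c^2,
\]
```

where $a^2$, $c^2$, and $e^2$ are the additive genetic, shared environmental, and unique environmental variance components; fitting these expressions to the covariances of monozygotic and dizygotic twin pairs yields the heritability $h^2 = a^2/(a^2+c^2+e^2)$.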
Abstract:
Stochastic structural systems having a stochastic distribution of material properties and stochastic external loadings in space are analysed when a crack of deterministic size is present. The material properties and external loadings are considered to constitute independent, two-dimensional, univariate, real, homogeneous stochastic fields. The stochastic fields are characterized by their means, variances, autocorrelation functions or the equivalent power spectral density functions, and scales of fluctuation. Young's modulus and Poisson's ratio are treated as stochastic quantities, and the external loading is treated as a stochastic field in space. The energy release rate is derived using the method of virtual crack extension. A deterministic relationship is derived to represent the sensitivities of the energy release rate with respect to both virtual crack extension and fluctuations of the real system parameters. A Taylor series expansion is used and truncated at first order, which leads to the determination of second-order properties of the output quantities to first order. Using linear perturbations about the mean values, statistical information about the energy release rates, stress intensity factors (SIFs) and crack opening displacements is obtained. Both plane stress and plane strain cases are considered. General expressions for the SIF in all three fracture modes are derived, and a more detailed analysis is conducted for a mode I situation. A numerical example is given.
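The first-order perturbation step described above amounts to standard propagation of means and (co)variances through a linearized response; schematically, for an output quantity $Y$ (the energy release rate, SIF, or crack opening displacement) depending on the discretized stochastic field variables $X_i$:

```latex
\[
  Y(\mathbf{X}) \;\approx\; Y(\boldsymbol{\mu})
  + \sum_i \frac{\partial Y}{\partial X_i}\bigg|_{\boldsymbol{\mu}} (X_i - \mu_i),
\]
\[
  \operatorname{E}[Y] \approx Y(\boldsymbol{\mu}), \qquad
  \operatorname{Var}[Y] \approx \sum_i \sum_j
  \frac{\partial Y}{\partial X_i}\bigg|_{\boldsymbol{\mu}}
  \frac{\partial Y}{\partial X_j}\bigg|_{\boldsymbol{\mu}}
  \operatorname{Cov}(X_i, X_j),
\]
```

with the sensitivities supplied by the virtual crack extension derivation.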
Abstract:
Flexible cantilever pipes conveying fluids at high velocity are analysed for their dynamic response and stability behaviour. The Young's modulus and mass per unit length of the pipe material have a stochastic distribution. The stochastic fields that model the fluctuations of Young's modulus and mass density are characterized through their respective means, variances, and autocorrelation functions or their equivalent power spectral density functions. The stochastic, non-self-adjoint partial differential equation is solved for the moments of the characteristic values by treating the point fluctuations as stochastic perturbations. The second-order statistics of the vibration frequencies and mode shapes are obtained. The critical flow velocity is first evaluated using the averaged eigenvalue equation. Through the eigenvalue equation, the statistics of the vibration frequencies are transformed to yield the statistics of the critical flow velocity. Expressions for the bounds of the eigenvalues are obtained, which in turn yield the corresponding bounds for the critical flow velocities.
Abstract:
In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC based on a novel chance constrained classifier. Using pairs of simple mean-variance values, our technique is able to reduce the complexity of the Intra MB coding process with a negligible loss in PSNR. We present an alternative, machine-learning-based approach to addressing this classification problem. Implementation results show that the proposed method reduces encoding time to about 20% of that of the reference implementation, with an average loss of 0.05 dB in PSNR.
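For intuition only, the sketch below shows a fixed-threshold version of a mean-variance gate that skips detailed intra mode evaluation for flat blocks; the threshold is a hypothetical placeholder, and the paper's method replaces such fixed cut-offs with a chance constrained classifier:

```python
def skip_detailed_intra_modes(block_pixels, var_threshold=25.0):
    """Illustrative early-termination rule on a block's (mean, variance) pair:
    flat, low-variance blocks are coded with a reduced intra mode set."""
    n = len(block_pixels)
    mean = sum(block_pixels) / n
    variance = sum((p - mean) ** 2 for p in block_pixels) / n
    return variance < var_threshold

# Example on a nearly flat 4x4 luma block (flattened to 16 samples).
print(skip_detailed_intra_modes([128, 129, 127, 128] * 4))
```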
Abstract:
In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is used prior to statistical downscaling to reduce systematic bias in the means and variances of GCM predictors relative to observations or to National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that, while it may reduce the bias in the mean and variance of a predictor variable, it is much harder to accommodate the bias in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. When predicting hydrologic scenarios, such uncorrected bias should be accounted for; otherwise it will propagate through the computations for subsequent years. In this study, a statistical method based on equi-probability transformation is applied after downscaling to remove the bias in the predicted hydrologic variable relative to the observed hydrologic variable over a baseline period. The model is applied to the prediction of monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
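A minimal sketch of the two correction steps mentioned above, standardising a GCM predictor against baseline reanalysis statistics and then applying an empirical equi-probability (quantile) mapping to the downscaled hydrologic variable; the data below are synthetic and the implementation details differ from the study's:

```python
import numpy as np

def standardise(gcm_series, ref_mean, ref_std):
    """Standardise a GCM predictor against reanalysis statistics for the
    baseline period (reduces bias in the mean and variance only)."""
    return (gcm_series - ref_mean) / ref_std

def equiprobability_transform(predicted, obs_baseline, pred_baseline):
    """Map each predicted value to the observed value with the same empirical
    non-exceedance probability in the baseline period (quantile mapping)."""
    probs = np.array([np.mean(pred_baseline <= v) for v in predicted])
    return np.quantile(obs_baseline, probs)

# Synthetic monthly streamflow values (arbitrary units).
rng = np.random.default_rng(0)
obs_base = rng.gamma(2.0, 50.0, 360)
pred_base = 0.8 * obs_base + 10.0                 # biased baseline predictions
future_pred = rng.gamma(2.0, 45.0, 120)
print(equiprobability_transform(future_pred, obs_base, pred_base)[:5])
```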
Abstract:
The fluctuating force model is developed and applied to the turbulent flow of a gas-particle suspension in a channel in the limit of high Stokes number, where the particle relaxation time is large compared with the fluid correlation time, and low particle Reynolds number, where the Stokes drag law can be used to describe the interaction between the particles and the fluid. In contrast to Couette flow, the fluid velocity variances in the different directions in the channel are highly non-homogeneous, and they exhibit significant variation across the channel. First, we analyse the fluctuating particle velocity and acceleration distributions at different locations across the channel. The distributions are found to be non-Gaussian near the centre of the channel, where they exhibit significant skewness and flatness. However, the acceleration distributions are closer to Gaussian at locations away from the channel centre, especially in regions where the variances of the fluid velocity fluctuations are at a maximum. The time correlations of the fluid velocity fluctuations and particle acceleration fluctuations are evaluated, and it is found that the time correlation of the particle acceleration fluctuations is close to that of the fluid velocity in a 'moving Eulerian' reference frame translating with the mean fluid velocity. The variances of the fluctuating force distributions in the Langevin simulations are determined from the time correlations of the fluid velocity fluctuations, and the results are compared with direct numerical simulations. Quantitative agreement between the two simulations is obtained provided the particle viscous relaxation time is at least five times larger than the fluid integral time.
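A minimal sketch of one time step of the kind of Langevin update used in fluctuating-force simulations: Stokes drag toward the local mean fluid velocity plus a Gaussian fluctuating force whose strength would in practice be set from the fluid velocity time correlations (all parameter values below are placeholders, not the channel-flow statistics of the paper):

```python
import math
import random

def langevin_step(v, u_mean, tau_p, force_strength, dt):
    """One Euler-Maruyama step of a Langevin equation for the particle velocity:
    dv = -(v - u_mean)/tau_p * dt + sqrt(2 * force_strength * dt) * N(0, 1).
    force_strength stands in for the diffusion coefficient obtained from the
    fluid velocity fluctuation time correlations."""
    drag = -(v - u_mean) / tau_p
    noise = math.sqrt(2.0 * force_strength * dt) * random.gauss(0.0, 1.0)
    return v + drag * dt + noise

# Hypothetical non-dimensional parameters.
v = 0.0
for _ in range(1000):
    v = langevin_step(v, u_mean=1.0, tau_p=5.0, force_strength=0.02, dt=0.01)
print(v)
```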