917 results for Random values
Abstract:
The dynamical discrete web (DyDW), introduced in the recent work of Howitt and Warren, is a system of coalescing simple symmetric one-dimensional random walks which evolve in an extra continuous dynamical time parameter tau. The evolution is by independent updating of the underlying Bernoulli variables indexed by discrete space-time that define the discrete web at any fixed tau. In this paper, we study the existence of exceptional (random) values of tau where the paths of the web do not behave like usual random walks and the Hausdorff dimension of the set of such exceptional tau. Our results are motivated by those about exceptional times for dynamical percolation in high dimension by Haggstrom, Peres and Steif, and in dimension two by Schramm and Steif. The exceptional behavior of the walks in the DyDW is rather different from the situation for the dynamical random walks of Benjamini, Haggstrom, Peres and Steif. For example, we prove that the walk from the origin S_0(tau) violates the law of the iterated logarithm (LIL) on a set of tau of Hausdorff dimension one. We also discuss how these and other results should extend to the dynamical Brownian web, the natural scaling limit of the DyDW. (C) 2009 Elsevier B.V. All rights reserved.
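For reference, the classical LIL that these exceptional walks violate states that, for a simple symmetric random walk S_n,

```latex
\limsup_{n \to \infty} \frac{S_n}{\sqrt{2 n \log \log n}} = 1 \quad \text{almost surely.}
```

The exceptional values of tau are precisely the dynamical times at which the walk from the origin fails this almost-sure bound.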
Abstract:
The traditional net present value (NPV) method for analyzing the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach for evaluating the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that not including the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the advisability of carrying out this type of business in Chile. The steps for the analysis of economic viability were: First, fitted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This result gave information to estimate the probability distributions of the stochastic output variables (SOV) such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the complete stochastic model simulation results were used to analyze alternative scenarios and provide the results to decision makers in the form of probabilities, probability distributions and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable investment alternative in fruit production in Chile.
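As a companion to the abstract above, here is a minimal sketch of the simulation scheme it describes: correlated stochastic inputs driving a Monte Carlo NPV distribution. All names, prices, costs, the horizon and the correlation value are illustrative assumptions, not the study's data.

```python
# Hedged sketch: Monte Carlo NPV with correlated stochastic inputs.
# All parameter values below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_iter = 10_000          # the study reports 10,000 iterations
horizon = 10             # project years (assumed)
discount = 0.10          # discount rate (assumed)
investment = 30_000      # initial outlay per hectare (assumed)
cost = 12_000            # annual operating cost, held constant (assumed)

# Correlated yield (t/ha) and price (USD/kg); a negative intratemporal
# correlation between them is assumed for the example.
mean = [8.0, 4.0]
cov = [[1.0**2, -0.3 * 1.0 * 0.8],
       [-0.3 * 1.0 * 0.8, 0.8**2]]
draws = rng.multivariate_normal(mean, cov, size=(n_iter, horizon))
yield_t = draws[..., 0].clip(min=0)   # t/ha per year per iteration
price = draws[..., 1].clip(min=0)     # USD/kg per year per iteration

cash_flows = yield_t * 1000 * price - cost            # net cash flow per year
years = np.arange(1, horizon + 1)
npv = (cash_flows / (1 + discount) ** years).sum(axis=1) - investment

# The empirical NPV distribution is the stochastic output variable of interest.
print(f"mean NPV = {npv.mean():,.0f}, P(NPV < 0) = {(npv < 0).mean():.2%}")
```

Dropping the off-diagonal covariance terms and rerunning shows the effect the abstract warns about: the NPV spread narrows, so downside risk is understated.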
Abstract:
Artificial pancreas systems are at the forefront of research towards automatic insulin infusion for patients with type 1 diabetes. Due to the high inter- and intra-patient variability of the diabetic population, the need for personalized approaches has been raised. This study presents an adaptive, patient-specific control strategy for glucose regulation based on reinforcement learning, specifically the Actor-Critic (AC) learning approach. The control algorithm provides daily updates of the basal rate and insulin-to-carbohydrate (IC) ratio in order to optimize glucose regulation. A method for the automatic and personalized initialization of the control algorithm is designed based on the estimation of the transfer entropy (TE) between insulin and glucose signals. The algorithm has been evaluated in silico in adults, adolescents and children for 10 days. Three initialization scenarios, i) zero values, ii) random values and iii) TE-based values, have been comparatively assessed. The results show that with TE-based initialization the algorithm learns faster, reaching 98%, 90% and 73% in the A+B zones of the Control Variability Grid Analysis for adults, adolescents and children, respectively, after five days, compared to 95%, 78% and 41% for random initialization and 93%, 88% and 41% for zero initial values. Furthermore, in the case of children, the daily Low Blood Glucose Index decreases much faster when TE-based tuning is applied. The results imply that automatic and personalized tuning based on TE shortens the learning period and improves the overall performance of the AC algorithm.
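For reference, transfer entropy is a standard directed dependence measure; in its generic first-order form (any longer embedding the paper may use is an assumption not reproduced here), the TE from the insulin signal I to the glucose signal G is

```latex
\mathrm{TE}_{I \to G} = \sum_{g_{t+1},\, g_t,\, i_t} p(g_{t+1}, g_t, i_t)\,
  \log \frac{p(g_{t+1} \mid g_t, i_t)}{p(g_{t+1} \mid g_t)}.
```

A positive TE indicates that past insulin values reduce the uncertainty about future glucose beyond what past glucose alone explains, which is the patient-specific coupling the initialization exploits.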
Abstract:
Given the importance of Guzera breeding programs for milk production in the tropics, the objective of this study was to compare alternative random regression models for the estimation of genetic parameters and prediction of breeding values. Test-day milk yield records (TDR) were collected monthly, with a maximum of 10 measurements. The database included 20,524 first-lactation records from 2816 Guzera cows. TDR data were analyzed by random regression models (RRM) considering additive genetic, permanent environmental and residual effects as random, and the effects of contemporary group (CG), calving age as a covariate (linear and quadratic effects) and the mean lactation curve as fixed. The additive genetic and permanent environmental effects were modeled by RRM using Wilmink, Ali and Schaeffer and cubic B-spline functions as well as Legendre polynomials. Residual variances were considered as heterogeneous classes, grouped differently according to the model used. Multi-trait analysis using finite-dimensional models (FDM) for test-day milk records and a single-trait model for 305-day milk yields (the conventional approach) using the restricted maximum likelihood method were also carried out for further comparison. By the statistical criteria adopted, the best RRM was the one using the cubic B-spline function with five random regression coefficients for the additive genetic and permanent environmental effects. However, models using the Ali and Schaeffer function or Legendre polynomials of second and fifth order for, respectively, the additive genetic and permanent environmental effects can be adopted, as little variation was observed in the genetic parameter estimates compared to those estimated by models using the B-spline function. Therefore, due to the lower complexity of the (co)variance estimation, the model using Legendre polynomials represented the best option for the genetic evaluation of the Guzera lactation records. An increase of 3.6% in the accuracy of the estimated breeding values was verified when using RRM. The rankings of animals were very similar whichever RRM was used to predict breeding values. Considering P305, results indicated only small to medium differences in the animals' rankings based on breeding values predicted by the conventional model or by RRM. Therefore, the sum of all the RRM-predicted breeding values along the lactation period (RRM305) can be used as a selection criterion for 305-day milk production. (c) 2014 Elsevier B.V. All rights reserved.
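For readers unfamiliar with how Legendre covariates enter an RRM, the sketch below builds normalized Legendre regressors for a test-day age. The age range and the convention that "order" counts regression coefficients are assumptions for illustration, not necessarily the paper's exact setup.

```python
# Hedged sketch: Legendre-polynomial covariates for a random regression model.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age_days, order, age_min=30, age_max=305):
    """Normalized Legendre covariates at a standardized test-day age."""
    # standardize age to [-1, 1], the usual domain for RRM covariates
    x = -1.0 + 2.0 * (age_days - age_min) / (age_max - age_min)
    # phi_k(x) = sqrt((2k + 1) / 2) * P_k(x), for k = 0 .. order-1
    return np.array([np.sqrt((2 * k + 1) / 2) * legendre.legval(x, [0] * k + [1])
                     for k in range(order)])

# e.g. covariates for a second-order additive-genetic regression at day 155:
print(legendre_covariates(155, order=2))
```

Each animal's additive genetic (or permanent environmental) deviation at a given test day is then the dot product of its random regression coefficients with these covariates.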
Abstract:
The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in contingent valuation (CV) can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported by a review of contingent valuation reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP (with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of the WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Objective: The aim of this study was to assess the effects of 830 and 670 nm laser irradiation on malondialdehyde (MDA) concentration in random skin-flap survival. Background Data: Low-level laser therapy (LLLT) has been reported to be successful in stimulating the formation of new blood vessels and activating superoxide dismutase delivery, thus helping to inhibit free-radical action and consequently reducing necrosis. Materials and Methods: Thirty Wistar rats were used, divided into three groups of 10 rats each. A random skin flap was raised on the dorsum of each animal. Group 1 was the control group; group 2 received 830 nm laser radiation; and group 3 was submitted to 670 nm laser radiation. The animals underwent laser therapy at an energy density of 36 J/cm² immediately after surgery and on the 4 days subsequent to surgery. The laser radiation was applied at one point, 2.5 cm from the flap's cranial base. The percentage of skin-flap necrosis area was calculated 7 days postoperatively using the paper-template method, and a skin sample was collected immediately afterwards to determine the MDA concentration. Results: Statistically significant differences were found between the necrosis percentages, with higher values in group 1 compared with groups 2 and 3. Groups 2 and 3 did not present statistically significant differences (p > 0.05). Group 3 had a lower MDA concentration compared to the control group (p < 0.05). Conclusion: LLLT was effective in increasing random skin-flap viability in rats, and the 670 nm laser was efficient in reducing the MDA concentration.
Abstract:
Mature weight breeding values were estimated using a multi-trait animal model (MM) and a random regression animal model (RRM). Data consisted of 82,064 weight records from 8,145 animals, recorded from birth to eight years of age. Weights at standard ages were considered in the MM. All models included contemporary groups as fixed effects, and age of dam (linear and quadratic effects) and animal age as covariates. In the RRM, mean trends were modelled through a cubic regression on orthogonal polynomials of animal age, and maternal genetic and direct and maternal permanent environmental effects were also included as random. Legendre polynomials of orders 4, 3, 6 and 3 were used for direct genetic, maternal genetic, direct permanent environmental and maternal permanent environmental effects, respectively, considering five classes of residual variances. Mature weight (five years) direct heritability estimates were 0.35 (MM) and 0.38 (RRM). The rank correlation between sires' breeding values estimated by MM and RRM was 0.82. However, selecting the top 2% (12) or 10% (62) of the young sires based on the MM-predicted breeding values, respectively 71% and 80% of the same sires would be selected if RRM estimates were used instead. The RRM modelled the changes in the (co)variances with age adequately, and larger breeding value accuracies can be expected using this model.
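A minimal sketch of the ranking comparison reported above (rank correlation plus top-percentage overlap). The EBV vectors are simulated stand-ins built to share the reported correlation of 0.82, and the sire count of 620 is inferred from the 2% (12) and 10% (62) figures; none of this is the study's actual data.

```python
# Hedged sketch: comparing sire rankings from two models (MM vs. RRM EBVs).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sires = 620  # inferred from "top 2% (12)" and "top 10% (62)" in the abstract
ebv_mm = rng.normal(size=n_sires)
# build a second EBV vector correlated ~0.82 with the first
ebv_rrm = 0.82 * ebv_mm + np.sqrt(1 - 0.82**2) * rng.normal(size=n_sires)

rho, _ = spearmanr(ebv_mm, ebv_rrm)
top = n_sires // 10                        # top 10% of sires
sel_mm = set(np.argsort(ebv_mm)[-top:])
sel_rrm = set(np.argsort(ebv_rrm)[-top:])
overlap = len(sel_mm & sel_rrm) / top
print(f"rank correlation = {rho:.2f}, top-10% overlap = {overlap:.0%}")
```

This is the practical meaning of a 0.82 rank correlation: a substantial but incomplete overlap in which sires are selected, as the 71% and 80% figures illustrate.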
Abstract:
The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the dependence of the multifractal spectrum on the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of the parameter values. (C) 2009 Elsevier B.V. All rights reserved.
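The Random Parameter model itself is not reproduced here; the sketch below only illustrates the RMT baseline step the abstract alludes to: comparing the eigenvalues of a sample correlation matrix against the Marchenko-Pastur bulk, outside of which eigenvalues carry genuine (non-noise) structure. Matrix sizes are illustrative assumptions.

```python
# Hedged sketch: sample-correlation eigenvalues of i.i.d. series versus the
# Marchenko-Pastur bulk edges; illustrative sizes only.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500                        # series count and length (assumed)
returns = rng.normal(size=(T, N))      # pure-noise stand-in data
corr = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

q = N / T
lam_minus = (1 - np.sqrt(q)) ** 2      # lower Marchenko-Pastur edge
lam_plus = (1 + np.sqrt(q)) ** 2       # upper Marchenko-Pastur edge
outside = ((eigvals < lam_minus) | (eigvals > lam_plus)).sum()
print(f"MP bulk = [{lam_minus:.2f}, {lam_plus:.2f}], eigenvalues outside: {outside}")
```

For pure noise essentially all eigenvalues fall inside the bulk; the Random Parameter model targets exactly the cases where a few do not.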
Abstract:
Gamma ray tomography experiments have been carried out to detect spatial patterns in the porosity of a 0.27 m diameter column packed with steel Raschig rings of different sizes: 12.6, 37.9 and 76 mm, using a first-generation CT system (Chen et al., 1998). A fast Fourier transform tomographic reconstruction algorithm has been used to calculate the spatial variation over the column cross section. Cross-sectional gas porosity and solids holdup distributions were determined. The values of the cross-sectional average gas porosity were epsilon = 0.849, 0.938 and 0.966 for the 12.6, 37.9 and 76 mm rings, respectively. The radial holdup variation within the packed bed has also been determined. The variation of the circumferentially averaged gas holdup in the radial direction indicates that the porosity in the column wall region is somewhat higher than that in the bulk region, due to the effect of the column wall. (C) 2009 Elsevier Ltd. All rights reserved.
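The study's first-generation CT system and its FFT-based algorithm are not public, so the sketch below only illustrates the generic reconstruct-then-average step with a standard filtered back-projection (scikit-image is assumed available), applied to a synthetic porosity phantom rather than gamma-ray data.

```python
# Hedged sketch: reconstruct a cross-section from projections and compute the
# cross-sectional average gas porosity; synthetic phantom, not CT data.
import numpy as np
from skimage.transform import radon, iradon

rng = np.random.default_rng(2)
size = 128
# synthetic porosity map: mostly gas (1.0) with solid packing (0.0) speckles
phantom = (rng.random((size, size)) > 0.15).astype(float)
yy, xx = np.mgrid[:size, :size]
mask = (xx - size / 2) ** 2 + (yy - size / 2) ** 2 <= (size / 2 - 2) ** 2
phantom[~mask] = 0.0                   # zero outside the column wall

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)             # simulated projections
recon = iradon(sinogram, theta=theta, filter_name="ramp")  # filtered back-projection

eps = recon[mask].mean()               # cross-sectional average gas porosity
print(f"average porosity inside column: {eps:.3f}")
```

Radial profiles like the circumferentially averaged holdup in the abstract follow by binning `recon` values by distance from the column center.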
Abstract:
A new conceptual model for soil pore-solid structure is formalized. Soil pore-solid structure is proposed to comprise spatially abutting elements, each with a value which is its membership to the fuzzy set "pore," termed porosity. These values range between zero (all solid) and unity (all pore). Images are used to represent structures in which the elements are pixels and the value of each is a porosity. Two-dimensional random fields are generated by allocating each pixel a porosity by independently sampling a statistical distribution. These random fields are reorganized into other pore-solid structural types by selecting parent points, each with a specified local region of influence. Pixels of larger or smaller porosity are aggregated about the parent points, within the region of influence, by controlled swapping of pixels in the image. This creates local regions of homogeneity within the random field, in a process similar to simulated annealing. The resulting structures are characterized using one- and two-dimensional variograms and functions describing their connectivity. A variety of examples of structures created by the model is presented and compared. Extension to three dimensions presents no theoretical difficulties and is currently under development.
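A minimal sketch of the reorganization step described above: a uniform random porosity field is rearranged by controlled pixel swaps so that high-porosity pixels aggregate around parent points, leaving the marginal distribution unchanged. Field size, parent count, radius and swap count are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: annealing-like rearrangement of a random porosity field by
# controlled pixel swapping around parent points.
import numpy as np

rng = np.random.default_rng(3)
size, n_parents, radius, n_swaps = 64, 5, 10, 20_000
field = rng.random((size, size))             # independent porosities in [0, 1]
parents = rng.integers(0, size, size=(n_parents, 2))

yy, xx = np.mgrid[:size, :size]
near = np.zeros((size, size), dtype=bool)    # union of regions of influence
for py, px in parents:
    near |= (yy - py) ** 2 + (xx - px) ** 2 <= radius ** 2

inside = np.argwhere(near)
outside = np.argwhere(~near)
for _ in range(n_swaps):
    a = tuple(inside[rng.integers(len(inside))])
    b = tuple(outside[rng.integers(len(outside))])
    # swap only when it moves the larger porosity into the parent region,
    # so swaps are "controlled" and the overall histogram is preserved
    if field[b] > field[a]:
        field[a], field[b] = field[b], field[a]

print(f"mean porosity near parents: {field[near].mean():.2f}, "
      f"elsewhere: {field[~near].mean():.2f}")
```

After the swaps the field has local regions of homogeneity (pore-rich patches around the parents) while the global porosity distribution is exactly the one originally sampled.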
Abstract:
BACKGROUND: Cost-effective means of assessing the levels of risk factors in the population have to be defined in order to monitor these factors over time and across populations. This study is aimed at analyzing the difference in population estimates of the mean levels of body mass index (BMI) and the prevalence of overweight between a health examination survey and a telephone survey. METHODS: The study compares the results of two health surveys, one by telephone (N=820) and the other by physical examination (N=1318). The two surveys, based on independent random samples of the population, were carried out over the same period (1992-1993) in the same population (canton of Vaud, Switzerland). RESULTS: Overall participation rates were 67% and 53% for the health interview survey (HIS) and the health examination survey (HES), respectively. In the HIS, the reporting rate was over 98% for weight and height values. Self-reported weight was on average lower than measured weight, by 2.2 kg in men and 3.5 kg in women, while self-reported height was on average greater than measured height, by 1.2 cm in men and 1.9 cm in women. As a result, in comparison to the HES, the HIS led to substantially lower mean levels of BMI, and to a reduction of the prevalence rates of obesity (BMI > 30 kg/m²) by more than a half. These differences are larger for women than for men. CONCLUSION: The two surveys were based on different sampling procedures. However, this difference in design is unlikely to explain the systematic bias observed between self-reported and measured values for height and weight. This bias compromises the overall validity of BMI assessment from telephone surveys.
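A short worked example of how the reported average biases translate into BMI units; the example subject is illustrative, not from the study.

```python
# Hedged sketch: effect of self-report bias on BMI for one illustrative man,
# using the abstract's average biases (-2.2 kg weight, +1.2 cm height).
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

measured_w, measured_h = 80.0, 1.75            # assumed subject
reported_w = measured_w - 2.2                  # under-reported weight
reported_h = measured_h + 0.012                # over-reported height

print(f"measured BMI = {bmi(measured_w, measured_h):.1f}")   # 26.1
print(f"self-reported BMI = {bmi(reported_w, reported_h):.1f}")  # 25.1
# Roughly one BMI unit lower: enough to shift borderline subjects below the
# obesity cut-off (BMI > 30), consistent with the halved prevalence estimate.
```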
Abstract:
The genetic variation and population structure of three populations of Anopheles darlingi from Colombia were studied using random amplified polymorphic DNA markers (RAPDs) and amplified fragment length polymorphism markers (AFLPs). Six RAPD primers produced 46 polymorphic fragments, while two AFLP primer combinations produced 197 polymorphic fragments from 71 DNA samples. Both of the evaluated genetic markers showed the presence of gene flow, suggesting that Colombian An. darlingi populations are in panmixia. Average genetic diversity, estimated from observed heterozygosity, was 0.374 (RAPD) and 0.309 (AFLP). RAPD and AFLP markers showed little evidence of geographic separation between eastern and western populations; however, the F_ST values showed high gene flow between the two western populations (RAPD: F_ST = 0.029, Nm = 8.5; AFLP: F_ST = 0.051, Nm = 4.7). According to analysis of molecular variance (AMOVA), the genetic distance between populations was significant (RAPD: Phi_ST = 0.084; AFLP: Phi_ST = 0.229, P < 0.001). The F_ST distances and AMOVAs using AFLP loci support the differentiation of the Guyana biogeographic province population from those of the Chocó-Magdalena. In this last region, the Chocó and Córdoba populations showed the highest gene flow.
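The reported Nm values follow from the standard island-model approximation Nm ≈ (1 − F_ST)/(4·F_ST), which reproduces the abstract's figures up to rounding:

```python
# Island-model estimate of gene flow from F_ST; F_ST values from the abstract.
def nm(fst):
    return (1 - fst) / (4 * fst)

print(f"RAPD: Nm = {nm(0.029):.1f}")  # 8.4, reported as 8.5 (rounding of F_ST)
print(f"AFLP: Nm = {nm(0.051):.1f}")  # 4.7, as reported
```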
Abstract:
One of the key aspects of 3D-image registration is the computation of the joint intensity histogram. We propose a new approach to computing this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D-images. The intensity values are captured from the lines at evenly spaced positions, with an initial random offset that differs for each line. This method provides us with accurate, robust and fast mutual information-based registration. The interpolation effects are drastically reduced, due to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show a better performance of the introduced method than the classic computation of the joint histogram.
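The random-line sampler itself is not reproduced here; the sketch below only shows the downstream step, estimating mutual information from a joint intensity histogram built from sampled intensity pairs (random correlated pairs stand in for the line samples).

```python
# Hedged sketch: mutual information from a joint intensity histogram; the
# intensity pairs below are synthetic stand-ins for line-sampled values.
import numpy as np

rng = np.random.default_rng(4)
a = rng.integers(0, 256, size=100_000)                       # image A samples
b = np.clip(a + rng.normal(0, 20, size=a.size), 0, 255).astype(int)  # image B

joint, _, _ = np.histogram2d(a, b, bins=32)
pxy = joint / joint.sum()                   # joint intensity distribution
px = pxy.sum(axis=1, keepdims=True)         # marginal of A
py = pxy.sum(axis=0, keepdims=True)         # marginal of B
nz = pxy > 0                                # avoid log(0)
mi = (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
print(f"mutual information ≈ {mi:.3f} nats")
```

In registration, this MI value is the objective maximized over candidate transforms; a better-sampled joint histogram gives a smoother, less interpolation-biased objective.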
Abstract:
Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ, respectively, during 2001 were incongruent, probably mostly due to rounding, transcription, or typesetting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change by one or more orders of magnitude. The frequencies of the last digit of the statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that the quality of papers should be more carefully checked and valued.
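A minimal sketch of the congruence check the abstract describes: recompute the P value from a reported test statistic and its degrees of freedom, then compare against the reported P value. The numbers and the rounding tolerance are illustrative assumptions.

```python
# Hedged sketch: recomputing a two-sided P value from a reported t statistic
# and degrees of freedom, then flagging incongruence with the reported P.
from scipy import stats

t_reported, df, p_reported = 2.35, 60, 0.02        # illustrative values
p_recomputed = 2 * stats.t.sf(abs(t_reported), df) # two-sided P value
print(f"recomputed P = {p_recomputed:.4f}")

# flag an incongruence only if the values disagree beyond plausible rounding
if abs(p_recomputed - p_reported) > 0.005:
    print("possible rounding, transcription, or typesetting error")
```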