8 results for random number generator

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance:

100.00%

Abstract:

We investigate the nonequilibrium roughening transition of a one-dimensional restricted solid-on-solid model by directly sampling the stationary probability density of a suitable order parameter as the surface adsorption rate varies. The shapes of the probability density histograms suggest a typical Ginzburg-Landau scenario for the phase transition of the model, and estimates of the "magnetic" exponent seem to confirm its mean-field critical behavior. We also find that the flipping times between the metastable phases of the model scale exponentially with the system size, signaling the breaking of ergodicity in the thermodynamic limit. Incidentally, we find that a closely related model not considered before also displays a phase transition with the same critical behavior as the original model. Our results support the usefulness of off-critical histogram techniques in the investigation of nonequilibrium phase transitions. In the appendix we also briefly describe the simple, good-quality pseudo-random number generator used in our simulations.
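
The appendix's generator is not reproduced in this listing, so as an illustration of the kind of simple, good-quality generator the abstract refers to, here is a minimal xorshift64* sketch in Python. The algorithm choice, class name and default seed are assumptions for illustration, not the paper's:

```python
class Xorshift64Star:
    """Minimal xorshift64* PRNG (illustrative; not the paper's generator)."""

    MASK = (1 << 64) - 1  # keep all arithmetic within 64 bits

    def __init__(self, seed=88172645463325252):
        if seed == 0:
            raise ValueError("seed must be nonzero")
        self.state = seed & self.MASK

    def next_uint64(self):
        # xorshift step followed by a multiplicative scramble
        x = self.state
        x ^= x >> 12
        x ^= (x << 25) & self.MASK
        x ^= x >> 27
        self.state = x
        return (x * 0x2545F4914F6CDD1D) & self.MASK

    def next_float(self):
        # uniform float in [0, 1) from the top 53 bits
        return (self.next_uint64() >> 11) / float(1 << 53)

rng = Xorshift64Star(seed=12345)
print([round(rng.next_float(), 4) for _ in range(5)])
```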

Relevance:

30.00%

Abstract:

In epidemiology, the basic reproduction number R0 is usually defined as the average number of new infections caused by a single infective individual introduced into a completely susceptible population. According to this definition, R0 is related to the initial stage of the spreading of a contagious disease. However, in epidemiological models based on ordinary differential equations (ODE), R0 is commonly derived from a linear stability analysis and interpreted as a bifurcation parameter: typically, when R0 > 1, the contagious disease tends to persist in the population because the endemic stationary solution is asymptotically stable; when R0 < 1, the corresponding pathogen tends to naturally disappear because the disease-free stationary solution is asymptotically stable. Here we intend to answer the following question: do these two different approaches for calculating R0 give the same numerical values? In other words, is the number of secondary infections caused by a single sick individual equal to the threshold obtained from the stability analysis of the steady states of the ODE? To find the answer, we use a susceptible-infective-recovered (SIR) model described both in terms of ODE and in terms of a probabilistic cellular automaton (PCA), where each individual (corresponding to a cell of the PCA lattice) is connected to others by a random network favoring local contacts. The values of R0 obtained from both approaches are compared, showing good agreement.
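
As a hedged illustration of the bifurcation-parameter viewpoint compared above, the sketch below integrates the standard SIR equations by Euler stepping (the parameter names beta and gamma, step sizes and initial conditions are illustrative assumptions; the paper's PCA formulation is not reproduced) and shows that the epidemic takes off exactly when R0 = beta/gamma exceeds 1:

```python
def simulate_sir(beta, gamma, s0=0.999, i0=0.001, dt=0.01, t_max=200.0):
    """Euler integration of the standard SIR model:
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i * dt   # new infections in this step
        new_rec = gamma * i * dt      # new recoveries in this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# R0 from the stability/bifurcation viewpoint: R0 = beta / gamma.
for beta, gamma in [(0.5, 1.0), (2.0, 1.0)]:
    s, _, _ = simulate_sir(beta, gamma)
    print(f"R0 = {beta / gamma:.1f}: final susceptible fraction = {s:.3f}")
# With R0 < 1 the disease dies out, leaving s close to s0;
# with R0 > 1 a macroscopic fraction of the population gets infected.
```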

Relevance:

30.00%

Abstract:

This paper presents a theoretical model for estimating the power, the optical signal-to-noise ratio (OSNR) and the number of carriers generated by a comb generator, taking as a reference the minimum OSNR required at the receiver input for a given fiber link. Based on the recirculating frequency-shifting technique, the generator relies on coherent and orthogonal multi-carriers (Coherent-WDM) derived from a single laser source (seed) to feed high-capacity (above 100 Gb/s) systems. The theoretical model has been validated by an experimental demonstration in which 23 comb lines, with OSNR ranging from 25 to 33 dB in a spectral window of approximately 3.5 nm, were obtained.
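
The abstract does not give the model's equations, but the carrier-counting step it describes can be sketched crudely: treat link degradation as a flat dB penalty and count the comb lines whose OSNR still meets the receiver's minimum. Every number and name below is an illustrative assumption, not the paper's model:

```python
def usable_carriers(line_osnr_db, link_penalty_db, min_rx_osnr_db):
    """Count comb lines whose OSNR, after subtracting an aggregate link
    penalty in dB, still meets the minimum OSNR required at the receiver.
    A back-of-envelope screen, not the paper's analytic model.
    """
    return sum(1 for osnr in line_osnr_db
               if osnr - link_penalty_db >= min_rx_osnr_db)

# Hypothetical per-line OSNR values spanning the 25-33 dB range reported:
lines = [25 + 8 * k / 22 for k in range(23)]  # 23 lines, 25 to 33 dB
print(usable_carriers(lines, link_penalty_db=5.0, min_rx_osnr_db=22.0))
```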

Relevance:

30.00%

Abstract:

Objective: This study aimed to investigate the effect of 830 and 670 nm diode lasers on the viability of random skin flaps in rats. Background data: Low-level laser therapy (LLLT) has been reported to be successful in stimulating the formation of new blood vessels and reducing the inflammatory process after injury. However, the efficiency of such treatment remains uncertain, and there is also some controversy regarding the efficacy of the different wavelengths currently on the market. Materials and methods: Thirty Wistar rats were used, divided into three groups of 10 rats each. A random skin flap was raised on the dorsum of each animal. Group 1 was the control group, group 2 received 830 nm laser radiation, and group 3 received 670 nm laser radiation (power density = 0.5 mW/cm²). The animals underwent laser therapy with an energy density of 36 J/cm² (total energy = 2.52 J, 72 sec per session) immediately after surgery and on the four subsequent days. Laser radiation was applied at a single point 2.5 cm from the cranial base of the flap. The percentage of skin flap necrosis area was calculated on the 7th postoperative day using the paper template method. A skin sample was collected immediately afterward to determine vascular endothelial growth factor (VEGF) expression and the epidermal cell proliferation index (Ki-67). Results: Statistically significant differences were found among the percentages of necrosis, with higher values observed in group 1 compared with groups 2 and 3. No statistically significant differences were found between these two groups using the paper template method. Group 3 presented the highest mean number of blood vessels expressing VEGF and of cells in the proliferative phase when compared with groups 1 and 2. Conclusions: LLLT was effective in increasing random skin flap viability in rats. The 670 nm laser produced more satisfactory results than the 830 nm laser.

Relevance:

30.00%

Abstract:

The measure called accessibility has been proposed as a means to quantify the efficiency of the communication between nodes in complex networks. This article reports results on the properties of accessibility, including its relationship with the average minimal time to visit all nodes reachable within h steps of a random walk starting from a source, as well as with the number of nodes visited after a finite period of time. We characterize the relationship between accessibility and the average number of walks required to visit all reachable nodes (the exploration time), conjecture that maximum accessibility implies minimal exploration time, and confirm the relationship between accessibility values and the number of nodes visited after a basic time unit. The latter relationship is investigated for three types of dynamics: traditional random walks, self-avoiding random walks, and preferential random walks.
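
In the accessibility literature, the quantity is commonly defined as the exponential of the entropy of the h-step transition probabilities of the walk; the sketch below follows that assumed definition for a traditional random walk (the article's exact conventions may differ, and every node is assumed to have at least one neighbor):

```python
import numpy as np

def accessibility(adj, h):
    """Accessibility of each node after h steps of a traditional random walk:
    exp of the entropy of the h-step transition probabilities.
    """
    adj = np.asarray(adj, dtype=float)
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    Ph = np.linalg.matrix_power(P, h)         # h-step transition probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.where(Ph > 0, Ph * np.log(Ph), 0.0).sum(axis=1)
    return np.exp(ent)

# A path graph on 4 nodes: interior nodes spread probability more evenly.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(accessibility(A, h=2))
```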

Relevance:

30.00%

Abstract:

We prove that asymptotically (as n → ∞) almost all graphs with n vertices and C(d) n^{2 - 1/(2d)} log^{1/d} n edges are universal with respect to the family of all graphs with maximum degree bounded by d. Moreover, we provide an efficient deterministic embedding algorithm for finding copies of bounded-degree graphs in graphs satisfying certain pseudorandom properties. We also prove a counterpart result for random bipartite graphs, where the threshold number of edges is even smaller but the embedding is randomized.
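
The paper's efficient deterministic algorithm exploits pseudorandom properties of the host graph and is not reproduced here; as a baseline illustration of what "embedding a bounded-degree graph" means, here is a naive backtracking search (function and variable names are illustrative):

```python
def embed(H_adj, G_adj):
    """Backtracking search for an embedding (injective, edge-preserving map)
    of a small graph H into a host graph G.
    H_adj, G_adj: dict mapping vertex -> set of neighboring vertices.
    Returns a dict H-vertex -> G-vertex, or None if no embedding is found.
    """
    h_vertices = list(H_adj)

    def extend(mapping, used):
        if len(mapping) == len(h_vertices):
            return dict(mapping)
        v = h_vertices[len(mapping)]
        for g in G_adj:
            if g in used:
                continue
            # every already-embedded neighbor of v must map to a G-neighbor of g
            if all(mapping[u] in G_adj[g] for u in H_adj[v] if u in mapping):
                mapping[v] = g
                used.add(g)
                found = extend(mapping, used)
                if found:
                    return found
                del mapping[v]
                used.discard(g)
        return None

    return extend({}, set())

# Embed a triangle into a 4-clique:
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
k4 = {a: {b for b in range(4) if b != a} for a in range(4)}
print(embed(tri, k4))  # e.g. {0: 0, 1: 1, 2: 2}
```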

Relevance:

30.00%

Abstract:

The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random effects. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariable. The random effects were modeled using B-spline functions with linear, quadratic or cubic polynomials for each individual segment. Residual variances were grouped into five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used to estimate maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses of nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots (three segments) for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight at young ages should therefore take into account the correlated increase in mature cow weight; this is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained under range conditions. There is limited scope for modifying the growth curve of Nellore cattle so as to select for rapid growth at young ages while keeping adult weight constant.
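
As a sketch of the covariable construction described above, the code below evaluates quadratic B-spline basis functions with four knots (three segments, matching the selected model) via the Cox-de Boor recursion; the knot positions and age range are illustrative assumptions, not those of the Nellore data:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of the given degree at x using
    the Cox-de Boor recursion. `knots` must repeat each boundary knot
    degree+1 times.
    """
    n = len(knots) - degree - 1  # number of basis functions
    # degree-0 basis: indicator of the knot span containing x
    B = [1.0 if knots[i] <= x < knots[i + 1] else 0.0
         for i in range(len(knots) - 1)]
    for d in range(1, degree + 1):
        for i in range(len(knots) - d - 1):
            left = right = 0.0
            if knots[i + d] != knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            if knots[i + d + 1] != knots[i + 1]:
                right = ((knots[i + d + 1] - x)
                         / (knots[i + d + 1] - knots[i + 1]) * B[i + 1])
            B[i] = left + right
    return np.array(B[:n])

# Quadratic splines, four distinct knots (three segments), ages 0-8 years:
knots = [0, 0, 0, 2.67, 5.33, 8, 8, 8]
print(bspline_basis(4.0, knots, degree=2))  # basis weights at age 4
```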

Relevance:

30.00%

Abstract:

Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. To decide whether a given probability is sufficient, the most common approach is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared with that of an alternative probability model, typically a null model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target-sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the cost of biological validation. Results: In all tests with random sequences, the target null model produced the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study used randomly generated sequences; previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a single benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
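
A minimal sketch of the null-model comparison described above: the log-odds score of a DNA sequence under a position-independent family model against either a uniform null or a target-sequence null. The distributions and the example sequence are invented for illustration; HMMER, SAM and INFERNAL use richer family models (profile HMMs), not a single residue distribution:

```python
import math

def log_odds(seq, target_probs, null_probs):
    """Log-odds score: log P(seq | family model) - log P(seq | null model),
    both taken as position-independent residue distributions.
    """
    return sum(math.log(target_probs[b]) - math.log(null_probs[b])
               for b in seq)

seq = "GCGCGGATCCGC"  # hypothetical GC-rich candidate sequence
gc_rich = {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1}      # family-like model
uniform = {b: 0.25 for b in "ACGT"}                     # uniform null
target = {b: seq.count(b) / len(seq) for b in "ACGT"}   # target-sequence null

print(log_odds(seq, gc_rich, uniform))  # inflated by the GC bias
print(log_odds(seq, gc_rich, target))   # compositional bias discounted
```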