915 results for pseudo-random permutation
Abstract:
We perform numerical simulations, including parallel tempering, of a four-state Potts glass model with quenched binary random couplings using the JANUS application-oriented computer. We find and characterize a glassy transition, estimating the critical temperature and the values of the critical exponents. Nevertheless, the extrapolation to infinite volume is hampered by strong scaling corrections. We show that there is no ferromagnetic transition in a large temperature range around the glassy critical temperature. We also compare our results with those obtained recently on the “random permutation” Potts glass.
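As a rough illustration of the sampling scheme described above, the following Python sketch runs parallel tempering on a toy one-dimensional four-state Potts chain with quenched ±1 couplings. Everything here (lattice, sizes, temperature grid) is a simplified stand-in chosen for brevity, not the JANUS simulation itself.

    import numpy as np

    rng = np.random.default_rng(0)

    Q, N, NT = 4, 64, 8                       # Potts states, chain length, replicas
    J = rng.choice([-1, 1], size=N)           # quenched binary couplings, periodic chain
    betas = np.linspace(0.3, 1.2, NT)         # inverse temperatures, one per replica
    spins = rng.integers(0, Q, size=(NT, N))  # one configuration per temperature

    def energy(s):
        # Potts energy: a bond contributes -J when neighbouring states match
        return -np.sum(J * (s == np.roll(s, -1)))

    def local_energy(s, i):
        # energy of the two bonds touching site i
        return -(J[i - 1] * (s[i] == s[i - 1]) + J[i] * (s[i] == s[(i + 1) % N]))

    def metropolis_sweep(s, beta):
        for i in rng.integers(0, N, size=N):
            old, e_old = s[i], local_energy(s, i)
            s[i] = rng.integers(0, Q)         # propose a new state for site i
            if rng.random() >= np.exp(-beta * (local_energy(s, i) - e_old)):
                s[i] = old                    # reject

    def tempering_swaps():
        # exchange configurations at neighbouring temperatures with
        # acceptance probability min(1, exp(d_beta * d_energy))
        for k in range(NT - 1):
            d = (betas[k + 1] - betas[k]) * (energy(spins[k + 1]) - energy(spins[k]))
            if rng.random() < np.exp(min(0.0, d)):
                spins[[k, k + 1]] = spins[[k + 1, k]]

    for sweep in range(200):
        for t in range(NT):
            metropolis_sweep(spins[t], betas[t])
        tempering_swaps()

    print("energies per replica:", [int(energy(spins[t])) for t in range(NT)])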
Abstract:
In Monte Carlo simulations of both lattice field theories and models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
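The control-variate idea the authors describe can be sketched generically: given samples of an observable O and an identity I whose exact mean is known (zero, say), the estimator O - c·I with c = Cov(O, I)/Var(I) has the same mean but reduced variance. The toy series below are synthetic placeholders, not the Ising observables from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for Monte Carlo output: O is the observable of interest,
    # I is an identity evaluated on each configuration whose exact mean is
    # known to be zero (playing the role of a Schwinger-Dyson or Callen
    # identity). Both series here are synthetic.
    n = 100_000
    I = rng.normal(0.0, 1.0, n)
    O = 2.0 + 0.8 * I + rng.normal(0.0, 0.5, n)

    # Optimal control-variate coefficient: c = Cov(O, I) / Var(I)
    c = np.cov(O, I)[0, 1] / np.var(I)
    O_cv = O - c * I                          # I has exact mean zero

    print("plain    :", O.mean(), "+/-", O.std(ddof=1) / np.sqrt(n))
    print("with c.v.:", O_cv.mean(), "+/-", O_cv.std(ddof=1) / np.sqrt(n))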
Abstract:
We describe a modification to a previously published pseudorandom number generator that improves security while maintaining high performance. The proposed generator is based on the powers of a word-packed block upper triangular matrix and is designed to be fast and easy to implement in software, since it mainly involves bitwise operations between machine registers; in our tests it exhibits excellent security and statistical characteristics. The modifications include a new, key-derived, s-box-based nonlinear output filter and improved seeding and extraction mechanisms. This output filter can also be applied to other generators.
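The architecture described (a linear core followed by a key-derived s-box output filter) can be illustrated with a deliberately simplified toy in Python. The linear step here is a 64-bit xorshift, which is linear over GF(2), standing in for the block-triangular matrix power; this sketches the structure only and is not the published generator.

    import random

    MASK = (1 << 64) - 1

    def make_sbox(key):
        # key-derived byte substitution table: shuffle 0..255 with the key
        table = list(range(256))
        random.Random(key).shuffle(table)
        return table

    class ToyGenerator:
        def __init__(self, seed, key):
            self.state = (seed & MASK) | 1    # avoid the all-zero state
            self.sbox = make_sbox(key)

        def _linear_step(self):
            # 64-bit xorshift: shifts and XORs between machine-word values,
            # linear over GF(2), standing in for the matrix-power core
            s = self.state
            s ^= (s << 13) & MASK
            s ^= s >> 7
            s ^= (s << 17) & MASK
            self.state = s
            return s

        def next64(self):
            # nonlinear output filter: pass each byte through the s-box
            s = self._linear_step()
            out = 0
            for i in range(8):
                out |= self.sbox[(s >> (8 * i)) & 0xFF] << (8 * i)
            return out

    g = ToyGenerator(seed=0x0123456789ABCDEF, key=42)
    print([hex(g.next64()) for _ in range(4)])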
Abstract:
An experimental testing system for studying the dynamic behavior of fluid-loaded rectangular micromachined silicon plates is designed and presented in this paper. In this system, the base-excitation technique, combined with a pseudo-random signal and cross-correlation analysis, is applied to test fluid-loaded microstructures. A theoretical model is also derived to reveal the mechanism of the experimental system when applied to fluid-loaded microstructures. The dynamic experiments cover a series of tests on various microplates with different boundary conditions and dimensions, both in air and immersed in water. This paper is the first to demonstrate the capability and performance of base excitation for the dynamic testing of microstructures in a natural fluid environment. Traditional modal analysis approaches are used to extract natural frequencies, modal damping, and mode shapes from the experimental data. The experimental results are discussed and compared with theoretical predictions. This research experimentally determines the dynamic characteristics of fluid-loaded silicon microplates, which can contribute to the design of plate-based microsystems. The experimental system and testing approaches presented here can be widely applied to investigating the dynamics of microstructures and nanostructures.
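The cross-correlation identity underlying such a system is easy to demonstrate numerically: for a white pseudo-random excitation, the input-output cross-correlation is proportional to the impulse response of the structure. The Python sketch below uses a synthetic single-mode damped oscillator as a stand-in for a microplate; all parameter values are made up.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic single-mode structure: a damped oscillator stands in for a
    # fluid-loaded microplate mode (all parameter values are made up)
    fs, n = 10_000.0, 100_000                 # sample rate (Hz), record length
    fn, zeta = 150.0, 0.02                    # natural frequency (Hz), damping ratio
    t = np.arange(2048) / fs
    wn = 2 * np.pi * fn
    wd = wn * np.sqrt(1 - zeta**2)
    h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd   # impulse response

    x = rng.standard_normal(n)                         # pseudo-random base excitation
    y = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)  # noisy response

    # For white excitation, R_xy(tau) = sigma_x^2 * h(tau), so the
    # cross-correlation recovers the impulse response
    Rxy = np.array([np.dot(x[:n - k], y[k:]) / (n - k) for k in range(len(t))])
    h_est = Rxy / x.var()

    rel_err = np.max(np.abs(h_est - h)) / np.max(np.abs(h))
    print("relative peak error of recovered impulse response:", rel_err)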
Abstract:
2000 Mathematics Subject Classification: 94A29, 94B70
Abstract:
We present experimental results for wavelength-division multiplexed (WDM) transmission performance using unbalanced proportions of 1s and 0s in pseudo-random bit sequence (PRBS) data. This investigation simulates the effect of local (in time) data unbalancing, which occurs in some coding systems, such as forward error correction, when extra bits are added to the WDM data stream. We show that such local unbalancing, which in practice gives a time-dependent error rate, can be employed to improve legacy long-haul WDM system performance if the system is allowed to operate in the nonlinear power region. We use a recirculating loop to emulate a long-haul fibre system.
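For illustration, a PRBS can be generated with a short linear-feedback shift register, and local unbalancing can be mimicked by periodically inserting extra 1s, as overhead bits would. The Python sketch below uses a standard PRBS-7 recurrence; the insertion pattern is an assumption for illustration, not the coding scheme from the paper.

    from itertools import islice

    def prbs7():
        # PRBS-7: linear-feedback shift register for x^7 + x^6 + 1, period 127
        state = 0x7F
        while True:
            bit = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | bit) & 0x7F
            yield bit

    def unbalanced(bits, extra=1, period=8):
        # crude local unbalancing: insert `extra` fixed 1s after every
        # `period` data bits, mimicking added overhead bits
        count = 0
        for b in bits:
            yield b
            count += 1
            if count % period == 0:
                for _ in range(extra):
                    yield 1

    seq = list(islice(unbalanced(prbs7()), 1016))
    print("fraction of 1s:", sum(seq) / len(seq))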
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, built on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS, implemented by Pingli Huang. The prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR while occupying 0.12 mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply, with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
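The bit-wise-correlation idea in the second design can be sketched in isolation: because an injected zero-mean pseudo-random sequence is statistically independent of the input, correlating the output against it isolates the mismatch term. The Python toy below is a behavioral illustration of that principle, not the chip's calibration logic.

    import numpy as np

    rng = np.random.default_rng(3)

    n = 1_000_000
    x = np.sin(2 * np.pi * 0.01 * np.arange(n))   # input signal, unknown to calibration
    pn = rng.choice([-1.0, 1.0], size=n)          # injected pseudo-random sequence
    err = 0.037                                   # unknown error coupled to the PN path

    y = x + err * pn                              # toy model of the converter output

    # Because pn is zero-mean and independent of x, E[y * pn] = err,
    # so a simple correlation estimates the error, which is then removed
    err_hat = np.mean(y * pn)
    y_cal = y - err_hat * pn

    print("estimated error :", err_hat)
    print("residual rms    :", np.std(y_cal - x))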
Abstract:
Recent studies on proteins whose N and C termini are in close proximity have demonstrated that folding of polypeptide chains and assembly of oligomers can be accomplished with circularly permuted chains. As yet, no methodical study has been conducted to determine how extensively new termini can be introduced and where such termini cannot be tolerated. We have devised a procedure to generate random circular permutations of the catalytic chains of Escherichia coli aspartate transcarbamoylase (ATCase; EC 2.1.3.2) and to select clones that produce active or stable holoenzyme containing permuted chains. A tandem gene construct was made, based on the desired linkage between amino acid residues in the C- and N-terminal regions of the polypeptide chain, and this DNA was treated with a suitable restriction enzyme to yield a fragment containing the rearranged coding sequence for the chain. Circularization with DNA ligase, followed by random linearization with DNase I and incorporation of the linearized, repaired, blunt-ended, rearranged genes into a suitable plasmid, permitted the expression of randomly permuted polypeptide chains. The plasmid, with appropriate stop codons, also contained pyrI, the gene encoding the regulatory chain of ATCase. Colonies expressing detectable amounts of ATCase-like molecules containing permuted catalytic chains were identified by an immunoblot technique or by their ability to grow in the absence of pyrimidines in the growth medium. Sequencing of positive clones revealed a variety of novel circular permutations. Some had N and C termini within helices of the wild-type enzyme, as well as deletions and insertions. Permutations were concentrated in the C-terminal domain, and only a few were detected in the N-terminal domain. The technique, which is adaptable generally to proteins whose N and C termini are near each other, can be of value in relating in vivo folding of nascent, growing polypeptide chains to in vitro renaturation of complete chains, and in determining the role of protein sequence in folding kinetics.
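Conceptually, a circular permutation is just a rotation of the chain: circularize, cut at a random position, and read off new termini. A minimal Python analogy, with a made-up residue string rather than the ATCase sequence:

    import random

    def circular_permutations(chain):
        # cutting the circularized chain at position i makes residue i
        # the new N terminus
        return [chain[i:] + chain[:i] for i in range(len(chain))]

    chain = "MSTAVLENPGLGRKLSD"                   # hypothetical short chain
    cut = random.randrange(len(chain))
    print("new N terminus at residue", cut + 1, "->", chain[cut:] + chain[:cut])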
Abstract:
A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
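A minimal numerical check of these statements can be coded directly: simulate a coined quantum walk, record the position distribution at each step, and compare Lorenz curves (partial sums of decreasingly sorted probabilities) for majorization. The sketch below uses the standard Hadamard walk with a simple initial coin state, an assumption of convenience rather than the symmetrized walk analysed in the paper.

    import numpy as np

    def hadamard_walk(steps):
        # state[position, coin]: amplitudes on positions -steps..steps
        n = 2 * steps + 1
        state = np.zeros((n, 2), dtype=complex)
        state[steps, 0] = 1.0                     # start at the origin, coin |0>
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        dists = []
        for _ in range(steps):
            state = state @ H.T                   # coin toss
            shifted = np.zeros_like(state)
            shifted[1:, 0] = state[:-1, 0]        # coin 0 moves right
            shifted[:-1, 1] = state[1:, 1]        # coin 1 moves left
            state = shifted
            dists.append((np.abs(state) ** 2).sum(axis=1))
        return dists

    def lorenz(p):
        # Lorenz curve: partial sums of probabilities sorted in decreasing order
        return np.cumsum(np.sort(p)[::-1])

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    dists = hadamard_walk(50)
    print("entropies at steps 49, 50:", entropy(dists[-2]), entropy(dists[-1]))
    # p majorizes q iff every Lorenz partial sum of p dominates that of q
    p, q = dists[-1], dists[-2]
    print("step 50 majorizes step 49:", bool(np.all(lorenz(p) >= lorenz(q) - 1e-12)))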
Abstract:
Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species, because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results.
Results: Models built with true absences had the best predictive power and best discriminatory power, and the "true" model (the one containing the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit but yielded the second-highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data.
Conclusion: If ecologists wish to build parsimonious GLM models that allow robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and to perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
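Strategy (b), random background pseudo-absences, is simple to reproduce on a synthetic virtual species. The Python sketch below (using numpy and scikit-learn, with made-up parameter values) fits a logistic regression to occurrences plus random pseudo-absences and scores it against the known true distribution.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)

    # Virtual landscape: three environmental predictors on 20,000 cells;
    # the virtual species responds to the first two only
    n_cells = 20_000
    env = rng.uniform(0, 1, size=(n_cells, 3))
    p_true = 1 / (1 + np.exp(-(6 * env[:, 0] - 4 * env[:, 1] - 1)))
    presence = rng.random(n_cells) < p_true

    occ = rng.choice(np.flatnonzero(presence), size=500, replace=False)
    pseudo_abs = rng.choice(n_cells, size=5_000, replace=False)  # strategy (b)

    X = np.vstack([env[occ], env[pseudo_abs]])
    y = np.concatenate([np.ones(len(occ)), np.zeros(len(pseudo_abs))])

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(presence, model.predict_proba(env)[:, 1])
    print("AUC against the known true distribution:", round(auc, 3))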
Abstract:
Hierarchical clustering is a popular method for finding structure in multivariate data, resulting in a binary tree constructed on the particular objects of the study, usually sampling units. The user faces the decision of where to cut the binary tree in order to determine the number of clusters to interpret, and there are various ad hoc rules for arriving at a decision. A simple permutation test is presented that diagnoses whether non-random levels of clustering are present in the set of objects and, if so, indicates the specific level at which the tree can be cut. The test is validated against random matrices to verify the type I error probability, and a power study is performed on data sets with known clusteredness to study the type II error.
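One way such a permutation test can be built is sketched below in Python, under assumptions of my own choosing (Ward linkage, a top-merge height-gap statistic, and columnwise permutation as the null); the published test may differ in its statistic and null construction.

    import numpy as np
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(5)

    # Two well-separated groups in five dimensions
    X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])

    def top_gap(data):
        # test statistic: height gap between the last two merges of the
        # tree; a large gap suggests a non-random 2-cluster structure
        Z = linkage(data, method="ward")
        return Z[-1, 2] - Z[-2, 2]

    observed = top_gap(X)

    # Null distribution: permute each column independently, destroying the
    # joint cluster structure while preserving the marginals
    null = [top_gap(np.column_stack([rng.permutation(col) for col in X.T]))
            for _ in range(999)]

    p = (1 + sum(s >= observed for s in null)) / (1 + len(null))
    print("permutation p-value for non-random clustering:", p)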
Abstract:
1. Few examples of habitat-modelling studies of rare and endangered species exist in the literature, although from a conservation perspective predicting their distribution would be particularly useful. Paucity of data and lack of valid absences are the probable reasons for this shortcoming. Analytic solutions that accommodate the lack of absences include ecological niche factor analysis (ENFA) and generalized linear models (GLM) with simulated pseudo-absences.
2. In this study we tested a new approach to generating pseudo-absences, based on a preliminary ENFA habitat suitability (HS) map, for the endangered species Eryngium alpinum. This method of generating pseudo-absences was compared with two others: (i) a GLM with pseudo-absences generated entirely at random, and (ii) an ENFA only.
3. The influence of two different spatial resolutions (i.e. grain) was also assessed, to tackle the dilemma of quality (grain) vs. quantity (number of occurrences). Each combination of the three above-mentioned methods with the two grains generated a distinct HS map.
4. Four evaluation measures were used to compare these HS maps: total deviance explained, best kappa, Gini coefficient, and minimal predicted area (MPA). The last is a new evaluation criterion proposed in this study.
5. Results showed that (i) GLM models using ENFA-weighted pseudo-absences provide better results, except for the MPA value, and that (ii) quality (spatial resolution and locational accuracy) of the data appears to be more important than quantity (number of occurrences). Furthermore, the proposed MPA value is suggested as a useful measure of model evaluation when used to complement classical statistical measures.
6. Synthesis and applications. We suggest that the use of ENFA-weighted pseudo-absences is a possible way to enhance the quality of GLM-based potential distribution maps, and that data quality (i.e. spatial resolution) prevails over quantity (i.e. number of data points). Increased accuracy of potential distribution maps could help to define better suitable areas for species protection and reintroduction.
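The ENFA-weighted pseudo-absence idea (point 2 above) amounts to sampling pseudo-absences preferentially from cells that a preliminary model scores as unsuitable. A minimal Python sketch, with random placeholder suitability scores in place of a real ENFA map:

    import numpy as np

    rng = np.random.default_rng(6)

    # Preliminary habitat-suitability scores in [0, 1] per grid cell,
    # e.g. from an ENFA model (random placeholders here)
    n_cells = 10_000
    hs = rng.beta(2, 5, size=n_cells)

    # Draw pseudo-absences preferentially from low-suitability cells by
    # weighting each cell with (1 - suitability)
    w = 1 - hs
    w /= w.sum()
    pseudo_abs = rng.choice(n_cells, size=1_000, replace=False, p=w)

    print("mean HS of pseudo-absences:", hs[pseudo_abs].mean(),
          "vs background:", hs.mean())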
Abstract:
In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1's as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1,∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating the smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
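The basic computation behind such approximations is accessible: a point z lies in the 2-norm ε-pseudospectrum of a finite matrix A exactly when the smallest singular value of A − zI is below ε. The Python sketch below evaluates this test for a finite hopping-sign-model matrix on a coarse grid; the size and grid are arbitrary choices for illustration, not the paper's inclusion-set construction.

    import numpy as np

    rng = np.random.default_rng(7)

    def hopping_matrix(n):
        # tridiagonal, zero main diagonal, random +/-1 off-diagonal entries
        off = rng.choice([-1.0, 1.0], size=(2, n - 1))
        return np.diag(off[0], 1) + np.diag(off[1], -1)

    def smin(A, z):
        # smallest singular value of A - zI; z belongs to the 2-norm
        # eps-pseudospectrum exactly when this value is below eps
        return np.linalg.svd(A - z * np.eye(len(A)), compute_uv=False)[-1]

    n, eps = 60, 0.1
    A = hopping_matrix(n)
    grid = [complex(a, b) for a in np.linspace(-2.5, 2.5, 7)
                          for b in np.linspace(-2.5, 2.5, 7)]
    inside = [z for z in grid if smin(A, z) < eps]
    print(len(inside), "of", len(grid), "grid points lie in the eps-pseudospectrum")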
Abstract:
Nonparametric simple-contrast estimates for one-way layouts, based on Hodges-Lehmann estimators for two samples, and confidence intervals for all contrasts involving only two treatments are found in the literature. Tests for such contrasts are performed from the distribution of the maximum of the rank sum between two treatments. For randomized block designs, simple contrast estimates based on Hodges-Lehmann estimators for one sample are presented. However, discussions concerning the significance levels of more complex contrast tests in nonparametric statistics are not well outlined. This work aims at presenting a methodology to obtain p-values for contrasts of any type, based on the construction of the permutations required by each design model using a C-language program for each design type. For small samples, all possible treatment configurations are enumerated in order to obtain the desired p-value. For large samples, a fixed number of random configurations is used. The program prompts for the input of contrast coefficients, but does not assume the existence of, or orthogonality among, them. For orthogonal contrasts, the decomposition of the value of the suitable statistic for each case is performed, and it is observed that the same procedure used in the parametric analysis of variance can be applied in the nonparametric case, that is, each of the orthogonal contrasts has a χ² distribution with one degree of freedom. The similarities between the p-values obtained for nonparametric contrasts and those obtained through approximations suggested in the literature are also discussed.
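For a single contrast in a one-way layout, the permutation scheme described reduces to: compute the contrast of group rank means, then recompute it under random relabelings of the observations. A Python sketch with made-up data (the paper's program is in C and handles general designs):

    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(8)

    # One-way layout with three treatments of ten observations each; the
    # contrast compares treatment 1 with the average of treatments 2 and 3
    y = np.concatenate([rng.normal(0.0, 1, 10),
                        rng.normal(0.8, 1, 10),
                        rng.normal(0.9, 1, 10)])
    g = np.repeat([0, 1, 2], 10)
    c = np.array([1.0, -0.5, -0.5])          # contrast coefficients, summing to zero
    r = rankdata(y)                          # rank-based, as in the nonparametric setting

    def contrast_stat(r, g):
        rank_means = np.array([r[g == k].mean() for k in range(3)])
        return abs(c @ rank_means)

    observed = contrast_stat(r, g)

    # Large-sample shortcut: a fixed number of random label permutations
    # instead of enumerating every treatment configuration
    null = [contrast_stat(r, rng.permutation(g)) for _ in range(9_999)]
    p = (1 + sum(s >= observed for s in null)) / (1 + len(null))
    print("permutation p-value for the contrast:", p)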
Abstract:
The existence of a small partition of a combinatorial structure into random-like subparts, a so-called regular partition, has proven to be very useful in the study of extremal problems, and has deep algorithmic consequences. The main result in this direction is the Szemerédi Regularity Lemma in graph theory. In this note, we are concerned with regularity in permutations: we show that every permutation of a sufficiently large set has a regular partition into a small number of intervals. This refines the partition given by Cooper (2006) [10], which required an additional non-interval exceptional class. We also introduce a distance between permutations that plays an important role in the study of convergence of a permutation sequence.