934 results for RANDOM PERMUTATION MODEL
Abstract:
We show that the Kronecker sum of d >= 2 copies of a random one-dimensional sparse model displays a spectral transition of the type predicted by Anderson, from absolutely continuous spectrum around the center of the band to pure point spectrum around the boundaries. Possible applications to physics and open problems are discussed briefly.
Abstract:
In this work, we present a supersymmetric extension of the quantum spherical model, both in the component and in the superspace formalisms. We find the solution for short- and long-range interactions through the imaginary-time path-integral approach. The existence of critical points (classical and quantum) is analyzed and the corresponding critical dimensions are determined.
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP, and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, as opposed to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that in [3] in terms of parameter recovery, mainly when using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. They indicate that the hierarchical approach allows us to implement MCMC algorithms more easily, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
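Henze's stochastic representation, on which the proposed one-step MHWGS algorithm is built, is simple to state: if U0 and U1 are independent standard normals and delta = alpha/sqrt(1 + alpha^2), then X = delta*|U0| + sqrt(1 - delta^2)*U1 follows a skew-normal law with shape alpha. The sketch below illustrates the representation only (in the direct, not centred, parameterization, and without any of the IRT machinery), checking the sample mean against the known value delta*sqrt(2/pi):

```python
import numpy as np

def sample_skew_normal(alpha, size, seed=None):
    """Draw skew-normal variates via Henze's (1986) representation:
    X = delta*|U0| + sqrt(1 - delta^2)*U1, with U0, U1 iid N(0, 1)
    and delta = alpha / sqrt(1 + alpha^2)."""
    rng = np.random.default_rng(seed)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.standard_normal(size))   # half-normal component
    u1 = rng.standard_normal(size)           # symmetric component
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

# E[X] = delta * sqrt(2/pi); compare with the sample mean.
x = sample_skew_normal(alpha=5.0, size=200_000, seed=42)
print(x.mean())
```

Because the half-normal component can be introduced as a latent variable, conditioning on it leaves a conditionally normal model, which is what removes one Metropolis-Hastings step.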
Abstract:
Objective: This study aimed to investigate the effect of 830 and 670 nm diode laser irradiation on the viability of random skin flaps in rats. Background data: Low-level laser therapy (LLLT) has been reported to be successful in stimulating the formation of new blood vessels and reducing the inflammatory process after injury. However, the efficiency of such treatment remains uncertain, and there is also some controversy regarding the efficacy of the different wavelengths currently on the market. Materials and methods: Thirty Wistar rats were divided into three groups of 10. A random skin flap was raised on the dorsum of each animal. Group 1 was the control group, group 2 received 830 nm laser radiation, and group 3 received 670 nm laser radiation (power density = 0.5 mW/cm²). The animals underwent laser therapy at an energy density of 36 J/cm² (total energy = 2.52 J, 72 sec per session) immediately after surgery and on the 4 subsequent days. The laser was applied at a single point 2.5 cm from the flap's cranial base. The percentage of skin flap necrosis area was calculated on the 7th postoperative day using the paper template method. A skin sample was then collected to determine vascular endothelial growth factor (VEGF) expression and the epidermal cell proliferation index (Ki-67). Results: Statistically significant differences were found in the percentage of necrosis, with higher values in group 1 than in groups 2 and 3; no statistically significant difference was found between groups 2 and 3 using the paper template method. Group 3 presented the highest mean number of blood vessels expressing VEGF and of cells in the proliferative phase when compared with groups 1 and 2. Conclusions: LLLT was effective in increasing random skin flap viability in rats. The 670 nm laser presented more satisfactory results than the 830 nm laser.
Abstract:
We investigate the nonequilibrium roughening transition of a one-dimensional restricted solid-on-solid model by directly sampling the stationary probability density of a suitable order parameter as the surface adsorption rate varies. The shapes of the probability density histograms suggest a typical Ginzburg-Landau scenario for the phase transition of the model, and estimates of the "magnetic" exponent seem to confirm its mean-field critical behavior. We also found that the flipping times between the metastable phases of the model scale exponentially with the system size, signaling the breaking of ergodicity in the thermodynamic limit. Incidentally, we discovered that a closely related model not considered before also displays a phase transition with the same critical behavior as the original model. Our results support the usefulness of off-critical histogram techniques in the investigation of nonequilibrium phase transitions. We also briefly discuss in the appendix a good and simple pseudo-random number generator used in our simulations.
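The abstract does not say which generator the appendix describes, so as a purely illustrative stand-in for the "good and simple" class of pseudo-random number generators used in such simulations, here is a minimal xorshift64* generator (an assumption for illustration, not the paper's generator):

```python
def xorshift64star(seed=88172645463325252):
    """Illustrative minimal PRNG (xorshift64*): a 64-bit xorshift state
    update followed by a multiplication that scrambles the output word.
    NOTE: this is an example of a simple generator, not necessarily the
    one described in the paper's appendix."""
    mask = 0xFFFFFFFFFFFFFFFF
    state = seed & mask
    while True:
        state ^= (state >> 12)
        state ^= (state << 25) & mask   # keep the state in 64 bits
        state ^= (state >> 27)
        # multiply step maps the state to the output; scale to [0, 1)
        yield ((state * 0x2545F4914F6CDD1D) & mask) / 2**64
```

A stream of uniform variates is obtained by repeatedly calling `next()` on the generator; identical seeds reproduce identical streams, which is essential for reproducible simulations.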
Abstract:
This paper addresses the numerical solution of random crack propagation problems by coupling the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled using the BEM, owing to its mesh-reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of the two coupling methods is compared in application to several crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of problem nonlinearity. The computational cost of direct coupling proved to be a fraction of the cost of the response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
The ground-state phase diagram of an Ising spin-glass model on a random graph with an arbitrary fraction w of ferromagnetic interactions is analysed in the presence of an external field. Using the replica method, and performing an analysis of stability of the replica-symmetric solution, it is shown that w = 1/2, corresponding to an unbiased spin glass, is a singular point in the phase diagram, separating a region with a spin-glass phase (w < 1/2) from a region with spin-glass, ferromagnetic, mixed and paramagnetic phases (w > 1/2).
Abstract:
Polynomial Chaos Expansion (PCE) is widely recognized as a flexible tool to represent different types of random variables and processes. However, applications to real, experimental data are still limited. In this article, PCE is used to represent the random time-evolution of metal corrosion growth in marine environments. The PCE coefficients are determined so as to represent the data of 45 corrosion coupons tested by Jeffrey and Melchers (2001) at Taylors Beach, Australia. The accuracy of the representation and the possibilities for model extrapolation are considered in the study. Results show that reasonably accurate smooth representations of the corrosion process can be obtained; the representation is limited mainly by the use of a smooth model to represent non-smooth corrosion data. Random corrosion leads to time-variant reliability problems, due to resistance degradation over time. Such problems are not trivial to solve, especially under random process loading. Two example problems are solved herein, showing how the developed PCE representations can be employed in the reliability analysis of structures subject to marine corrosion. Monte Carlo simulation is used to solve the resulting time-variant reliability problems, but an accurate and more computationally efficient solution is also presented.
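As a minimal sketch of how PCE coefficients can be determined by Galerkin projection (illustrative only: the article fits coefficients to corrosion-coupon data, whereas here a toy lognormal variable with known coefficients is used for a correctness check):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, order, quad_pts=40):
    """Project f(xi), xi ~ N(0, 1), onto probabilists' Hermite polynomials:
    c_k = E[f(xi) He_k(xi)] / k!   (since E[He_k^2] = k!),
    with the expectation computed by Gauss-Hermite quadrature."""
    nodes, weights = He.hermegauss(quad_pts)   # weight function exp(-x^2/2)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalise to the Gaussian pdf
    coeffs = []
    for k in range(order + 1):
        hk = He.hermeval(nodes, [0] * k + [1])  # He_k at the quadrature nodes
        coeffs.append(np.sum(weights * f(nodes) * hk) / math.factorial(k))
    return np.array(coeffs)

def pce_eval(coeffs, xi):
    """Evaluate the truncated expansion sum_k c_k He_k(xi)."""
    return He.hermeval(xi, coeffs)

# Toy target: a lognormal variable X = exp(xi), whose exact coefficients
# are c_k = exp(1/2) / k! -- a quick sanity check of the projection.
c = pce_coefficients(np.exp, order=8)
print(c[:3])
```

Once the coefficients are in hand, `pce_eval` gives a cheap surrogate for the random variable, which is what makes PCE attractive inside Monte Carlo reliability loops.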
Abstract:
Particle tracking of microbeads attached to the cytoskeleton (CSK) reveals intermittent dynamics. The mean squared displacement (MSD) is subdiffusive for small Δt and superdiffusive for large Δt, behaviours associated with periods of trapping and periods of jumps, respectively. Analysis of the displacements shows non-Gaussian behaviour, which is indicative of active motion and classifies the cell as a far-from-equilibrium material. Using Langevin dynamics, we reconstruct the dynamics of the CSK. The model is based on bundles of actin filaments that bind to the bead's RGD coating, trapping it in a harmonic potential. We consider one-dimensional motion of the particle and neglect inertial effects (over-damped Langevin dynamics). The resultant force is decomposed into a friction force, an elastic force and a random force, the last modelled as white noise representing the effect of molecular agitation. This description so far yields a static situation in which the bead performs a random walk in an elastic potential. To model the active remodeling of the CSK, we vary the equilibrium position of the potential: the well centre is moved linearly in time with constant velocity. The resulting MSD versus lag time τ exhibits three regimes. In the first regime, τ < τ₀, where τ₀ is the relaxation time, the motion is thermal and the particle diffuses freely. The second regime is a plateau, τ₀ < τ < τ₁, representing the particle caged in the potential; here, τ₁ is the characteristic time that limits the confinement period. In the third regime, τ > τ₁, the particle is superdiffusive. This is where most experiments are performed: at about 20 frames per second (FPS) there is no experimental evidence supporting the first regime.
We are currently performing high-frequency experiments, up to 100 FPS, in an attempt to visualize this free-diffusion regime. Apart from the first regime, our simple model reproduces MSD curves similar to those found experimentally, which can be helpful in understanding CSK structure and properties.
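The over-damped Langevin model with a uniformly moving trap centre described above can be sketched as follows (an illustrative Euler-Maruyama integration with arbitrary parameter values, not the model fitted to CSK data):

```python
import numpy as np

def simulate_trap(n_steps=20000, dt=1e-3, k=1.0, gamma=1.0, D=1.0, v=0.5,
                  seed=0):
    """Euler-Maruyama integration of an over-damped bead in a harmonic
    trap whose centre moves at constant speed v:
        dx = -(k/gamma) * (x - v*t) * dt + sqrt(2*D) * dW.
    Parameter values here are illustrative, not fitted to experiments."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps - 1)
    for i in range(n_steps - 1):
        centre = v * (i * dt)                   # moving equilibrium position
        drift = -(k / gamma) * (x[i] - centre)  # elastic restoring force
        x[i + 1] = x[i] + drift * dt + noise[i]
    return x

def msd(x, lags):
    """Time-averaged mean squared displacement at the given lag indices."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

x = simulate_trap()
lags = np.array([1, 10, 100, 1000, 10000])
print(msd(x, lags))
```

With these parameters the three regimes appear as free diffusion (MSD ≈ 2DΔt) at short lags, a caging plateau around the trap relaxation time γ/k, and drift-dominated superdiffusive growth at long lags.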
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators, and the validity of our purpose-written R code, by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
Abstract:
Representing the transport and fate of an oil slick at the sea surface is a formidable task. With an accurate numerical representation of oil evolution and movement in seawater, the possibility of assessing and reducing the oil-spill pollution risk can be greatly improved. Wind blowing over the sea surface generates ocean waves, which give rise to transport of pollutants by wave-induced velocities known as Stokes' drift velocities. The Stokes' drift transport associated with a random gravity wave field is a function of the wave energy spectrum that statistically describes the field and that can be provided by a numerical wave model. Therefore, to perform an accurate numerical simulation of oil motion in seawater, the oil-spill model must be coupled with a wave forecasting model. In this thesis work, the coupling of the MEDSLIK-II oil-spill numerical model with the SWAN wind-wave numerical model has been performed and tested. To improve understanding of the wind-wave model and its numerical performance, a preliminary sensitivity study of different SWAN model configurations was carried out. The SWAN model results were compared with data from the ISPRA directional buoys located at Venezia, Ancona and Monopoli, and the best model settings were identified. High-resolution currents provided by a relocatable model (SURF) were then used to force both the wave and oil-spill models, and the coupling with the SWAN model was tested. The trajectories of four drifters were simulated using either JONSWAP parametric spectra or SWAN directional-frequency energy output spectra, and the results were compared with the real paths travelled by the drifters.
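As a building block of the wave-induced transport discussed above, the deep-water Stokes drift of a single monochromatic wave has a simple closed form, u_s(z) = σ k a² exp(2kz) with σ = sqrt(gk) and z <= 0; the spectral transport used in the coupled models integrates contributions of this kind over the wave energy spectrum. A minimal sketch:

```python
import math

def stokes_drift(amplitude, wavelength, depth_below_surface=0.0, g=9.81):
    """Deep-water Stokes drift speed (m/s) of a monochromatic wave:
        u_s(z) = sigma * k * a^2 * exp(2*k*z),  sigma = sqrt(g*k),
    where a is the amplitude, k the wavenumber and z <= 0 the depth.
    Illustrative building block only, not the coupled spectral model."""
    k = 2.0 * math.pi / wavelength        # wavenumber
    sigma = math.sqrt(g * k)              # deep-water dispersion relation
    z = -abs(depth_below_surface)         # z is negative below the surface
    return sigma * k * amplitude**2 * math.exp(2.0 * k * z)

# Surface drift for a 1 m amplitude, 100 m wavelength wave:
print(stokes_drift(amplitude=1.0, wavelength=100.0))
```

Note the exponential decay with depth at the scale 1/(2k): the drift is strongly surface-trapped, which is why it matters for floating pollutants such as oil.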
Abstract:
This thesis deals with three different physical models, each involving a random component linked to a cubic lattice. First, we study a model used in numerical calculations of Quantum Chromodynamics, in which random gauge fields are distributed on the bonds of the lattice. The formulation of the model is fitted into the mathematical framework of ergodic operator families. We prove that, for small coupling constants, the ergodicity of the underlying probability measure is indeed ensured and that the integrated density of states of the Wilson-Dirac operator exists. The physical situations treated in the next two chapters are more similar to one another. In both cases the principal idea is to study a fermion system in a cubic crystal with impurities, modeled by a random potential located at the lattice sites. In the second model we apply the Hartree-Fock approximation to such a system. For the case of reduced Hartree-Fock theory at positive temperature and fixed chemical potential, we consider the limit of an infinite system and show the existence and uniqueness of minimizers of the Hartree-Fock functional. In the third model we formulate the fermion system algebraically via C*-algebras. The question posed here is how to calculate the heat production of the system under the influence of an external electromagnetic field. We show that the heat production corresponds exactly to what is empirically predicted by Joule's law in the regime of linear response.
Abstract:
In this thesis we studied the emergence of critical events in a simple Integrate-and-Fire neural model, based on Markovian stochastic dynamical processes defined on a network. The electrical neural signal was modelled as a flow of particles. Attention was focused on the transient phase of the system, seeking to identify phenomena similar to neural synchronization, which can be considered a critical event. Particularly simple networks were studied, and the proposed model was found capable of producing "cascade" effects in the neural activity, due to Self-Organized Criticality (the self-organization of the system into unstable states); these effects are not observed in Random Walks on the same networks. A small random stimulus was seen to generate considerable fluctuations in the network activity, especially when the system is in a phase at the edge of equilibrium. The activity peaks thus detected were interpreted as avalanches of neural signal, a phenomenon attributable to synchronization.
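The cascade mechanism described above can be illustrated with a minimal stochastic integrate-and-fire model on an open chain (a toy sketch in the spirit of sandpile-style self-organized criticality, not the networks or dynamics actually studied in the thesis):

```python
import random

def avalanche_sizes(n_nodes=50, drive_steps=2000, seed=1):
    """Toy integrate-and-fire chain: each drive step adds one signal
    'particle' to a random node; a node holding 2 or more particles
    fires, losing 2 particles and sending one to each neighbour, which
    may fire in turn -- an avalanche. Particles leaving the chain ends
    are lost (open boundaries), so every avalanche terminates."""
    rng = random.Random(seed)
    load = [0] * n_nodes
    sizes = []
    for _ in range(drive_steps):
        load[rng.randrange(n_nodes)] += 1     # small random stimulus
        stack = [i for i in range(n_nodes) if load[i] >= 2]
        size = 0
        while stack:
            i = stack.pop()
            while load[i] >= 2:               # node fires until stable
                load[i] -= 2
                size += 1
                for j in (i - 1, i + 1):
                    if 0 <= j < n_nodes:
                        load[j] += 1          # particle sent to neighbour
                        if load[j] >= 2:
                            stack.append(j)
        sizes.append(size)                    # number of firings triggered
    return sizes

sizes = avalanche_sizes()
print(max(sizes))
```

After a transient, most single-particle stimuli trigger no activity, while occasional ones trigger avalanches spanning many nodes, mimicking the large fluctuations described above.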
Abstract:
We use a conceptual model to investigate how randomly varying building heights within a city affect the atmospheric drag forces and the aerodynamic roughness length of the city. The model is based on the assumptions regarding wake spreading and mutual sheltering effects proposed by Raupach (Boundary-Layer Meteorol 60:375-395, 1992). It is applied both to canopies having uniform building heights and to those having the same building density and mean height, but with variability about the mean. For each simulated urban area, a correction is determined, due to height variability, to the shear stress predicted for the uniform-building-height case. It is found that u*/u*R, where u* is the friction velocity and u*R is the friction velocity from the uniform-building-height case, is expressed well as an algebraic function of lambda and sigma_h/h_m, where lambda is the frontal area index, sigma_h is the standard deviation of the building height, and h_m is the mean building height. The simulations also resulted in a simple algebraic relation for z_0/z_0R as a function of lambda and sigma_h/h_m, where z_0 is the aerodynamic roughness length and z_0R is the z_0 found from the original Raupach formulation for a uniform canopy. Model results are in keeping with those of several previous studies.
Abstract:
Full axon counting of optic nerve cross-sections represents the most accurate method to quantify axonal damage, but such analysis is very labour intensive. Recently, a new method has been developed, termed targeted sampling, which combines the salient features of a grading scheme with axon counting. Preliminary findings revealed the method compared favourably with random sampling. The aim of the current study was to advance our understanding of the effect of sampling patterns on axon counts by comparing estimated axon counts from targeted sampling with those obtained from fixed-pattern sampling in a large collection of optic nerves with different severities of axonal injury.