897 results for Minimization Problem, Lattice Model
Abstract:
This work presents numerical simulations of two fluid flow problems involving moving free surfaces: the impacting drop and fluid jet buckling. The viscoelastic model used in these simulations is the eXtended Pom-Pom (XPP) model. To validate the code, numerical predictions of the drop impact problem for Newtonian and Oldroyd-B fluids are presented and compared with other methods. In particular, a benchmark on numerical simulations of an XPP drop impacting a rigid plate is performed for a wide range of the relevant parameters. Finally, to provide an additional application of free surface flows of XPP fluids, the viscous jet buckling problem is simulated and discussed. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
This paper provides additional validation to the problem of estimating wave spectra based on the first-order motions of a moored vessel. Prior investigations conducted by the authors have attested that even a large-volume ship, such as an FPSO unit, could be adopted for on-board estimation of the wave field. The obvious limitation of the methodology concerns filtering of high-frequency wave components, for which the vessel has no significant response. As a result, the estimation range is directly dependent on the characteristics of the vessel response. In order to extend this analysis, further small-scale tests were performed with a model of a pipe-laying crane-barge. When compared to the FPSO case, the results attest that a broader range of typical sea states can be accurately estimated, including crossed-sea states with low peak periods. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
We analyse the phase diagram of a quantum mean spherical model in terms of the temperature T, a quantum parameter g, and the ratio p = -J2/J1, where J1 > 0 refers to ferromagnetic interactions between first-neighbour sites along the d directions of a hypercubic lattice, and J2 < 0 is associated with competing antiferromagnetic interactions between second neighbours along m <= d directions. We regain a number of known results for the classical version of this model, including the topology of the critical line in the g = 0 space, with a Lifshitz point at p = 1/4, for d > 2, and closed-form expressions for the decay of the pair correlations in one dimension. In the T = 0 phase diagram, there is a critical border, g_c = g_c(p), for d >= 2, with a singularity at the Lifshitz point if d < (m + 4)/2. We also establish upper and lower critical dimensions, and analyse the quantum critical behavior in the neighborhood of p = 1/4. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP, and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and the estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, mainly when using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with the ones obtained by Azevedo et al. The results indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
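As an aside, the Henze (1986) stochastic representation on which the proposed algorithm relies can be stated simply: if U and V are independent standard normals, then Z = delta*|U| + sqrt(1 - delta^2)*V is skew-normal with shape alpha, where delta = alpha/sqrt(1 + alpha^2). A minimal sampling sketch of that representation (illustrative only; this is not the paper's MHWGS implementation, and the function name is ours):

```python
import math
import random

def sample_skew_normal(alpha, n, seed=0):
    """Draw n skew-normal variates via Henze's stochastic representation:
    Z = delta*|U| + sqrt(1 - delta^2)*V, with U, V iid N(0, 1) and
    delta = alpha / sqrt(1 + alpha^2)."""
    rng = random.Random(seed)
    delta = alpha / math.sqrt(1.0 + alpha * alpha)
    return [delta * abs(rng.gauss(0.0, 1.0))
            + math.sqrt(1.0 - delta * delta) * rng.gauss(0.0, 1.0)
            for _ in range(n)]
```

In a hierarchical scheme, |U| can be treated as a latent variable, which is presumably the structure the paper's one-step algorithm exploits; the sample mean of Z approaches delta*sqrt(2/pi).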
Abstract:
We have performed multicanonical simulations to study the critical behavior of the two-dimensional Ising model with dipole interactions. This study concerns the thermodynamic phase transitions in the range of the interaction delta where the phase characterized by striped configurations of width h = 1 is observed. Controversial results obtained from local update algorithms have been reported for this region, including the claimed existence of a second-order phase transition line that becomes first order above a tricritical point located somewhere between delta = 0.85 and 1. Our analysis relies on the complex partition function zeros obtained with high statistics from multicanonical simulations. Finite size scaling relations for the leading partition function zeros yield critical exponents that are clearly consistent with a single second-order phase transition line, thus excluding such a tricritical point in that region of the phase diagram. This conclusion is further supported by analysis of the specific heat and the susceptibility of the orientational order parameter.
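For reference, the finite-size scaling relation standardly used for the leading partition function (Fisher) zero, though not written out in the abstract, is

```latex
z_1(L) - z_c \sim L^{-1/\nu},
```

so the leading zero approaches the real axis as Im z_1(L) ~ L^{-1/nu}. A first-order transition would instead show the volume scaling Im z_1(L) ~ L^{-d} (an effective 1/nu = d), which is how the zeros discriminate between the two scenarios.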
Abstract:
We consider an interacting particle system representing the spread of a rumor by agents on the d-dimensional integer lattice. Each agent may be in any of the three states belonging to the set {0,1,2}. Here 0 stands for ignorants, 1 for spreaders and 2 for stiflers. A spreader tells the rumor to any of its (nearest) ignorant neighbors at rate lambda. At rate alpha a spreader becomes a stifler due to the action of other (nearest neighbor) spreaders. Finally, spreaders and stiflers forget the rumor at rate one. We study sufficient conditions under which the rumor either becomes extinct or survives with positive probability.
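The dynamics just described can be sketched with a Gillespie-style continuous-time simulation on a finite ring (a hedged illustration, not from the paper: reading alpha as a rate per neighbouring spreader is our assumption, and all names and parameter values are illustrative):

```python
import random

def simulate_rumor(L=30, lam=2.0, alpha=1.0, t_max=50.0, seed=1):
    # States on a ring of L sites: 0 = ignorant, 1 = spreader, 2 = stifler.
    # Gillespie-style simulation; "alpha per neighbouring spreader" is one
    # possible reading of the stifling mechanism in the abstract.
    rng = random.Random(seed)
    state = [0] * L
    state[0] = 1                                  # a single initial spreader
    t = 0.0
    while t < t_max:
        events = []                               # (rate, site, new_state)
        for i, s in enumerate(state):
            if s == 1:
                for d in (-1, 1):                 # tell ignorant neighbours
                    j = (i + d) % L
                    if state[j] == 0 and lam > 0.0:
                        events.append((lam, j, 1))
                k = sum(state[(i + d) % L] == 1 for d in (-1, 1))
                if k and alpha > 0.0:             # stifled by spreader neighbours
                    events.append((alpha * k, i, 2))
                events.append((1.0, i, 0))        # spreader forgets at rate one
            elif s == 2:
                events.append((1.0, i, 0))        # stifler forgets at rate one
        total = sum(r for r, _, _ in events)
        if total == 0.0:
            break                                 # absorbing all-ignorant state
        t += rng.expovariate(total)
        x = rng.uniform(0.0, total)
        for r, site, new in events:               # pick one event by its rate
            x -= r
            if x <= 0.0:
                state[site] = new
                break
    return state
```

With lam = 0 the rumor is never transmitted, so forgetting drives the system to the all-ignorant absorbing state, mirroring the extinction regime studied in the paper.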
Abstract:
This paper addresses the problem of water-demand forecasting for real-time operation of water supply systems. The present study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their enhanced capability to match or even improve on the regression model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecasting as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The tested inputs for the neural networks were selected from the literature and by correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s for the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced a MAE of 3.1 L/s and 3.0 L/s for the training and test sets, respectively, which represented about 12% of average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177. (C) 2012 American Society of Civil Engineers.
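The accuracy figures quoted above use the mean absolute error; as a reminder, it is simply the average absolute deviation between observed and forecast demand (the demand values below are invented for illustration, not the paper's data):

```python
def mae(y_true, y_pred):
    # mean absolute error: average of |observed - forecast| (here in L/s)
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

# toy hourly demand values (L/s), observed vs forecast -- illustrative only
print(mae([52.0, 48.0, 50.0], [50.0, 49.0, 53.0]))  # prints 2.0
```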
Abstract:
The fast and strong social and economic transformations in the economies of many countries have raised the competition for consumers. One of the elements required to adapt to such a scenario is knowing customers and their perceptions about products or services, mainly regarding word-of-mouth recommendations. This study adapts, to the fast food business, a model originally designed to analyze the antecedents of the intent to recommend by clients of formal restaurants. Three constructs were considered: service quality, satisfaction, and social well-being, the latter comprising positive and negative affect. Six hypotheses were considered, three relating to social well-being (that it influences satisfaction, service quality, and the intent to recommend), two relating to service quality (that it influences the intent to recommend and satisfaction), and one relating to the influence of satisfaction on the intent to recommend. None was rejected, indicating adherence and adjustment of the simplification and adaptation of the consolidated model. Through a successful empirical application, the main contribution made by this research is the simplification of a model through its application in a similar context, but with a different scope.
Abstract:
In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results are dependent on the failure probabilities used as constraints in the analysis. Risk optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and the optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, this solution also reduces manufacturing costs while increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors or by failure probability constraints, but will depend on the actual structural configuration. (c) 2011 Elsevier Ltd. All rights reserved.
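The economy-versus-safety trade-off described above can be illustrated with a deliberately tiny toy model (the linear cost and exponential failure-probability functions below are invented for illustration, not taken from the paper): total expected cost is manufacturing cost plus expected cost of failure, and the risk-optimal design balances the two.

```python
import math

def total_expected_cost(d, c_unit=1.0, c_fail=1000.0):
    # Toy model: manufacturing cost grows linearly with the design
    # variable d, while the failure probability decays as exp(-d).
    pf = math.exp(-d)                    # assumed failure-probability model
    return c_unit * d + c_fail * pf      # construction + expected failure cost

# risk optimization: minimize total expected cost over a grid of designs
best_cost, best_d = min(
    (total_expected_cost(d / 100.0), d / 100.0) for d in range(1, 2001)
)
```

For this toy model the optimum is d* = ln(c_fail/c_unit): a cheaper design (smaller d) lowers the manufacturing cost but raises the expected failure cost, which is exactly the effect the abstract attributes to DDO and RBDO solutions.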
Abstract:
Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. These days, the use of evolutionary algorithms (EAs) to solve optimization problems is a common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving, when compared to classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a specific mutation operation based on adding differences between randomly selected elements of the population to another element. Due to its simple implementation, minimal mathematical processing and good optimization capability, DE has attracted attention. This paper proposes a new approach to solve electromagnetic design problems that combines the DE algorithm with a generator of chaos sequences. This approach is tested on the design of a loudspeaker model with 17 degrees of freedom, to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences presents better, or at least similar, results when compared to the standard DE algorithm and other evolutionary algorithms available in the literature.
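The mutation step described above, combined with a chaotic sequence, can be sketched as follows (a generic DE/rand/1/bin on a test function, with the logistic map driving the scale factor F; the parameter values and test function are our choices, not the paper's):

```python
import random

def logistic_map(x):
    # chaotic sequence generator: the logistic map in its chaotic regime
    return 4.0 * x * (1.0 - x)

def de_optimize(f, dim=5, pop_size=20, gens=200, cr=0.9, seed=0):
    # Sketch of differential evolution (DE/rand/1/bin) where a logistic-map
    # chaotic sequence drives the mutation scale factor F.
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    chaos = 0.7                                   # seed of the chaotic sequence
    for _ in range(gens):
        for i in range(pop_size):
            chaos = logistic_map(chaos)
            F = 0.4 + 0.5 * chaos                 # F varies chaotically in [0.4, 0.9]
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            # mutant vector: base element plus scaled difference of two others
            mutant = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(dim)]
            jrand = rng.randrange(dim)            # guarantee one mutant gene
            trial = [mutant[k] if (k == jrand or rng.random() < cr) else pop[i][k]
                     for k in range(dim)]
            f_trial = f(trial)
            if f_trial <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return min(fit)

# minimize the sphere function sum(x_k^2); the optimum value is 0
best = de_optimize(lambda x: sum(v * v for v in x))
```

The greedy selection step (a trial vector replaces its parent only if it is no worse) is what makes DE "less stochastic" than classical EAs, as the abstract notes.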
Abstract:
This paper addresses the numerical solution of random crack propagation problems using the coupling of the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled using BEM, due to its mesh reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared in application to some crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem nonlinearity. The computational cost of direct coupling has been shown to be a fraction of the cost of response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
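The design-point search underlying both coupling strategies can be sketched with the classic HLRF iteration in standard normal space; gradients here come from finite differences on the limit state function, mimicking the direct-coupling idea (the explicit limit state below is a hypothetical stand-in for the implicit BEM response):

```python
import math

def hlrf_design_point(g, u0, tol=1e-8, max_iter=50, h=1e-6):
    # HLRF iteration for the FORM design point in standard normal space.
    # The gradient of g is obtained by finite differences, so g may be an
    # implicit (numerically evaluated) limit state function.
    u = list(u0)
    for _ in range(max_iter):
        gu = g(u)
        grad = []
        for k in range(len(u)):                   # finite-difference gradient
            up = list(u)
            up[k] += h
            grad.append((g(up) - gu) / h)
        norm2 = sum(c * c for c in grad)
        scale = (sum(c * x for c, x in zip(grad, u)) - gu) / norm2
        u_new = [scale * c for c in grad]         # HLRF update
        converged = max(abs(a - b) for a, b in zip(u_new, u)) < tol
        u = u_new
        if converged:
            break
    beta = math.sqrt(sum(x * x for x in u))       # reliability index
    return u, beta

# hypothetical explicit limit state g(u) = 3 - u1 - u2; exact beta = 3/sqrt(2)
u_star, beta = hlrf_design_point(lambda u: 3.0 - u[0] - u[1], [0.1, 0.1])
```

In direct coupling each evaluation of g would be a full BEM crack propagation analysis, which is why the number of iterations to the design point dominates the computational cost.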
Abstract:
We investigate the classical integrability of the Alday-Arutyunov-Frolov model, and show that the Lax connection can be reduced to a simpler 2 x 2 representation. Based on this result, we calculate the algebra between the L-operators and find that it has a highly non-ultralocal form. We then employ and suitably generalize the regularization technique proposed by Maillet for a simpler class of non-ultralocal models, and find the corresponding r- and s-matrices. We also make a connection between the operator-regularization method proposed earlier for the quantum case and Maillet's symmetric limit regularization prescription used for non-ultralocal algebras in the classical theory.
Abstract:
We present the first numerical implementation of the minimal Landau background gauge for Yang-Mills theory on the lattice. Our approach is a simple generalization of the usual minimal Landau gauge and is formulated for the general SU(N) gauge group. We also report on preliminary tests of the method in the four-dimensional SU(2) case, using different background fields. Our tests show that the convergence of the numerical minimization process is comparable to the case of a null background. The uniqueness of the minimizing functional employed is briefly discussed.
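For context, the usual minimal Landau gauge that the method generalizes is obtained (in the standard lattice convention; the abstract does not spell it out) by minimizing, over local gauge transformations g(x) in SU(N), the functional

```latex
E_U[g] \;=\; 1 \;-\; \frac{1}{d\,N\,V}\sum_{x}\sum_{\mu=1}^{d}
\operatorname{Re}\operatorname{Tr}\bigl[\,g(x)\,U_\mu(x)\,g^{\dagger}(x+\hat{\mu})\,\bigr],
```

whose stationary points satisfy the lattice Landau gauge condition (vanishing lattice divergence of the gauge field); in the background version one would presumably measure the links relative to the chosen background field.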
Abstract:
This paper addresses the m-machine no-wait flow shop problem in which the set-up time of a job is separated from its processing time. The performance measure considered is the total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with those of the best method reported in the literature. Experimental tests show the superiority of the new method on the test problem set with regard to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
We consider a two-parameter family of Z(2) gauge theories on a lattice discretization T(M) of a three-manifold M and their relation to topological field theories. Familiar models, such as the spin-gauge model, are curves on a parameter space Gamma. We show that there is a region Gamma(0) subset of Gamma where the partition function and the expectation value <W_R(gamma)> of the Wilson loop can be exactly computed. Depending on the point of Gamma(0), the model behaves as topological or quasi-topological. The partition function is, up to a scaling factor, a topological number of M. The Wilson loop, on the other hand, does not depend on the topology of gamma. However, for a subset of Gamma(0), <W_R(gamma)> depends on the size of gamma and follows a discrete version of an area law. In the zero temperature limit, the spin-gauge model approaches the topological or the quasi-topological region depending on the sign of the coupling constant.