943 results for MCDONALD EXTENDED EXPONENTIAL MODEL
Abstract:
Consider N sites randomly and uniformly distributed in a d-dimensional hypercube. A walker explores this disordered medium by moving to the nearest site that has not been visited in the last μ (memory) steps. The walker trajectory is composed of a transient part and a periodic part (cycle). For one-dimensional systems, the walker may or may not explore all the available space, giving rise to a crossover between localized and extended regimes at the critical memory μ₁ = log₂ N. The deterministic rule can be softened to account for more realistic situations by including a stochastic parameter T (temperature). In this case, the walker movement is driven by a probability density function parameterized by T and a cost function. The cost function increases with the distance between two sites and thus favors hops to closer sites. As the temperature increases, the walker can escape from cycles, which are reminiscent of the deterministic dynamics, and extend the exploration. Here, we report an analytical model and numerical studies of the influence of the temperature and the critical memory on the exploration of one-dimensional disordered systems.
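A minimal sketch of one step of such a stochastic walker, assuming a Boltzmann-like choice probability exp(-d/T) over the sites not visited in the last μ steps; the exact cost function and the uniform one-dimensional site layout are illustrative assumptions, not necessarily the authors' choices.

```python
import numpy as np

def walker_step(positions, current, recent, T, rng):
    """One step of the temperature-softened walk on a 1D disordered medium.

    positions: array of site coordinates; current: index of the occupied site;
    recent: set with the indices visited in the last mu steps (taboo list);
    T: temperature, with T -> 0 recovering the deterministic nearest-site rule.
    """
    candidates = [i for i in range(len(positions))
                  if i != current and i not in recent]
    d = np.abs(positions[candidates] - positions[current])
    if T == 0:
        return candidates[int(np.argmin(d))]        # deterministic rule
    w = np.exp(-(d - d.min()) / T)                  # assumed cost ~ distance
    return candidates[rng.choice(len(candidates), p=w / w.sum())]

# Example: N = 50 sites uniformly distributed on [0, 1), memory mu = 3
rng = np.random.default_rng(0)
positions = rng.random(50)
print(walker_step(positions, current=0, recent={1, 2, 3}, T=0.1, rng=rng))
```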
Abstract:
The nuclear gross theory, originally formulated by Takahashi and Yamada (1969 Prog. Theor. Phys. 41 1470) for beta-decay, is applied to electron-neutrino-nucleus reactions, employing a more realistic description of the energetics of the Gamow-Teller resonances. The model parameters are fitted to the most recent experimental data, both for beta(-)-decay and electron capture, separately for even-even, even-odd, odd-odd and odd-even nuclei. The numerical estimates for neutrino-nucleus cross sections agree fairly well with previous evaluations performed within the framework of microscopic models. The formalism presented here can be extended to the heavy-nuclei mass region, where weak processes are quite relevant; this is of astrophysical interest because of its applications to explosive nucleosynthesis in supernovae.
Abstract:
The existence of juxtaposed regions of distinct cultures, in spite of the fact that people's beliefs tend to become more similar to one another's as the individuals interact repeatedly, is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent bias model of social influence, in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case of a tie among the neighbors' opinions. We assume that the individuals are fixed at the sites of a square lattice of linear size L and that they interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture, with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate the existence of a rich distribution of opinion domains or clusters, the number of which grows with L² whereas the size of the largest cluster grows with ln L². The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not too large sizes but are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we find that these opinion domains are unstable to the effect of a thermal-like noise.
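A minimal sketch of one synchronous sweep of this update rule on an L x L lattice with periodic boundaries, assuming binary opinions coded as ±1; since the extended neighborhood (the site plus its four nearest neighbors) has odd size, retention of the current opinion under a 2-2 tie among the neighbors is enforced automatically by the self vote. The lattice size and the synchronous update schedule are illustrative assumptions.

```python
import numpy as np

def majority_update(opinions):
    """One sweep of the extended-neighborhood majority rule.

    opinions: (L, L) array of +1/-1 values with periodic boundaries.
    Each site adopts the majority opinion of itself plus its four nearest
    neighbors; a tie among the neighbors is broken by the site's own opinion.
    """
    total = (opinions
             + np.roll(opinions, 1, axis=0) + np.roll(opinions, -1, axis=0)
             + np.roll(opinions, 1, axis=1) + np.roll(opinions, -1, axis=1))
    return np.sign(total).astype(int)   # sum of five +/-1 values is odd, never zero

# Example: random initial opinions on a 64 x 64 lattice
rng = np.random.default_rng(1)
state = rng.choice([-1, 1], size=(64, 64))
for _ in range(100):
    state = majority_update(state)
print("density of +1 opinions:", (state == 1).mean())
```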
Abstract:
This article presents a BEM formulation developed particularly for the analysis of plates reinforced by rectangular beams. It is an extended version of a previous paper that took into account only bending effects. The problem is now reformulated to consider both bending and membrane force effects. The effects of the reinforcements are taken into account by using a simplified scheme that requires the application of an initial stress field to locally correct the bending and stretching stiffness of the reinforcement regions. The domain integrals due to the presence of the reinforcements are then transformed to the reinforcement/plate interface. To reduce the number of degrees of freedom related to the presence of the reinforcement, the proposed model was simplified to consider only bending and stretching rigidities in the direction of the beams. The complete model can be recovered by applying all six internal force correctors, corresponding to six degrees of freedom per node. Examples are presented to confirm the accuracy of the formulation and to illustrate the level of simplification introduced by this strong reduction in the number of degrees of freedom. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Petri net (PN) modeling is one of the most widely used formal methods in the field of automation applications, together with programmable logic controllers (PLCs). Therefore, the creation of a PN modeling methodology compatible with the IEC 61131 standard is a necessity for automation specialists. Different works dealing with this subject have been carried out; they are presented in the first part of this paper [Frey (2000a, 2000b); Peng and Zhou (IEEE Trans Syst Man Cybern, Part C Appl Rev 34(4):523-531, 2004); Uzam and Jones (Int J Adv Manuf Technol 14(10):716-728, 1998)], but they do not present a methodology that is completely compatible with this standard. At the same time, they do not maintain the simplicity required for such applications, nor the use of all-graphical and all-mathematical ordinary Petri net (OPN) tools to facilitate model verification and validation. The proposal presented here fulfils these requirements. Educational applications at the USP and UEA (Brazil) and the UO (Cuba), as well as industrial applications in Brazil and Cuba, have already been carried out with good results.
Abstract:
A model predictive controller (MPC) is proposed that is robustly stable for some classes of model uncertainty and for unknown disturbances. The case of open-loop stable systems is considered, where only the inputs and controlled outputs are measured. It is assumed that the controller will work in a scenario where target tracking is also required. Here, the nominal infinite-horizon MPC is extended to output feedback. The method considers an extended cost function that can be made globally convergent for any finite input horizon considered for the uncertain system. The method is based on the explicit inclusion of cost-contracting constraints in the control problem. The controller handles the output feedback case through a non-minimal state-space model that is built using past output measurements and past input increments. The application of the robust output feedback MPC is illustrated through the simulation of a low-order multivariable system.
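A schematic of the cost-contracting idea with assumed notation (a sketch, not the paper's exact formulation): if V_k denotes the extended cost evaluated with the solution obtained at time k, the optimization is constrained so that the cost cannot increase from one sampling instant to the next,

\[
V_k \le V_{k-1}, \qquad
V_k = \sum_{j=0}^{\infty} \big\| y(k+j \mid k) - y_{sp} \big\|_Q^2
      + \sum_{j=0}^{m-1} \big\| \Delta u(k+j \mid k) \big\|_R^2 ,
\]

so that the non-increasing cost sequence can be used to argue convergence of the closed loop.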
Abstract:
A procedure is proposed for the determination of the residence time distribution (RTD) of curved tubes, taking into account the non-ideal detection of the tracer. The procedure was applied to two holding tubes used for milk pasteurization at laboratory scale. Experimental data were obtained using an ionic tracer. The signal distortion caused by the detection system was considerable because of the short residence time. Four RTD models, namely axial dispersion, extended tanks in series, generalized convection and PFR + CSTR association, were adjusted after convolution with the E-curve of the detection system. The generalized convection model provided the best fit because it could better represent the tail of the tracer concentration curve that is caused by the laminar velocity profile and the recirculation regions. Adjusted model parameters were well correlated with the flow rate. (C) 2010 Elsevier Ltd. All rights reserved.
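A minimal sketch of the fitting step, assuming a tanks-in-series E-curve as the RTD model: the model response is convolved with the measured detection-system E-curve before being compared with the measured outlet signal, and the model parameters are adjusted by least squares. The function names, sampling grid and synthetic data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def tanks_in_series(t, tau, n):
    """Tanks-in-series E-curve with mean residence time tau and n tanks."""
    return (n / tau) * (n * t / tau) ** (n - 1) * np.exp(-n * t / tau) / gamma(n)

def convolved_model(t, tau, n, e_detector, dt):
    """RTD model convolved with the detection-system E-curve."""
    return np.convolve(tanks_in_series(t, tau, n), e_detector)[: len(t)] * dt

# Illustrative data: time grid, detector E-curve and a noisy "measured" signal
dt = 0.1
t = np.arange(dt, 60.0, dt)
e_detector = tanks_in_series(t, 2.0, 3.0)          # stand-in detector response
e_measured = convolved_model(t, 15.0, 8.0, e_detector, dt)
e_measured = e_measured + np.random.default_rng(2).normal(0.0, 1e-3, t.size)

popt, _ = curve_fit(lambda tt, tau, n: convolved_model(tt, tau, n, e_detector, dt),
                    t, e_measured, p0=(10.0, 5.0),
                    bounds=([0.1, 1.0], [100.0, 50.0]))
print("fitted mean residence time and number of tanks:", popt)
```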
Abstract:
This paper concerns the development of a stable model predictive controller (MPC) to be integrated with real-time optimization (RTO) in the control structure of a process system with stable and integrating outputs. The real-time process optimizer produces optimal targets for the system inputs and outputs that should be dynamically implemented by the MPC controller. This paper is based on a previous work (Comput. Chem. Eng. 2005, 29, 1089) where a nominally stable MPC was proposed for systems with the conventional control approach in which only the outputs have set points. This work is also based on the work of Gonzalez et al. (J. Process Control 2009, 19, 110), where the zone control of stable systems is studied. The new controller is obtained by defining an extended control objective that includes input targets and zone control of the outputs. Additional decision variables are also defined to increase the set of feasible solutions to the control problem. The hard constraints resulting from the cancellation of the integrating modes at the end of the control horizon are softened, and the resulting control problem is made feasible for a large class of unknown disturbances and changes of the optimizing targets. The methods are illustrated with the simulated application of the proposed approaches to a distillation column of the oil refining industry.
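A schematic of the zone-control objective with assumed notation (a sketch of the idea, not the exact formulation of the paper): the output set-point y_{sp,k} becomes an additional decision variable restricted to the control zone, while quadratic penalties pull the inputs toward the RTO targets u_des,

\[
\min_{\Delta u_k,\; y_{sp,k}} \;
\sum_{j} \big\| y(k+j \mid k) - y_{sp,k} \big\|_{Q_y}^2
+ \sum_{j} \big\| u(k+j \mid k) - u_{des} \big\|_{Q_u}^2
+ \sum_{j} \big\| \Delta u(k+j \mid k) \big\|_{R}^2
\quad \text{s.t.} \quad y_{\min} \le y_{sp,k} \le y_{\max}.
\]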
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
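For reference, the generic relations used to pass from the density f(t) and distribution function F(t) of a lifetime T to the reliability and failure rate functions and the moments; the specific two-parameter form of f is the one derived in the paper.

\[
S(t) = 1 - F(t) = \Pr(T > t), \qquad
h(t) = \frac{f(t)}{S(t)}, \qquad
\mathrm{E}(T^r) = \int_0^{\infty} t^r f(t)\, dt .
\]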
Abstract:
Integrable Kondo impurities in the one-dimensional supersymmetric U model of strongly correlated electrons are studied by means of the boundary graded quantum inverse scattering method. The boundary K-matrices depending on the local magnetic moments of the impurities are presented as non-trivial realizations of the reflection equation algebras in an impurity Hilbert space. Furthermore, the model Hamiltonian is diagonalized and the Bethe ansatz equations are derived. It is interesting to note that our model exhibits a free parameter in the bulk Hamiltonian but no free parameter exists on the boundaries. This is in sharp contrast to the impurity models arising from the supersymmetric t-J and extended Hubbard models where there is no free parameter in the bulk but there is a free parameter on each boundary.
Abstract:
A mixture model for long-term survivors has been adopted in various fields, such as biostatistics and criminology, where some individuals may never experience the type of failure under study. It is directly applicable in situations where the only information available from follow-up on individuals who will never experience this type of failure is in the form of censored observations. In this paper, we consider a modification to the model so that it still applies in the case where, during the follow-up period, it becomes known that an individual will never experience failure from the cause of interest. Unless a model allows for this additional information, a consistent survival analysis will not be obtained. A partial maximum likelihood (ML) approach is proposed that preserves the simplicity of the long-term survival mixture model and provides consistent estimators of the quantities of interest. Some simulation experiments are performed to assess the efficiency of the partial ML approach relative to the full ML approach for survival in the presence of competing risks.
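A standard way to write such a long-term survivor (cure) mixture model, with notation assumed here for illustration: if π is the proportion of individuals who never experience the failure of interest and S_0(t) is the survivor function of the susceptible individuals, the overall survivor function is

\[
S(t) = \pi + (1 - \pi)\, S_0(t), \qquad 0 \le \pi \le 1,
\]

so that S(t) tends to the immune proportion π as t grows.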
Abstract:
Three kinds of integrable Kondo problems in one-dimensional extended Hubbard models are studied by means of the boundary graded quantum inverse scattering method. The boundary K matrices depending on the local moments of the impurities are presented as a nontrivial realization of the graded reflection equation algebras acting in a (2s_α + 1)-dimensional impurity Hilbert space. Furthermore, these models are solved using the algebraic Bethe ansatz method, and the Bethe ansatz equations are obtained.
Abstract:
The convection-dispersion model and its extended form have been used to describe solute disposition in organs and to predict hepatic availabilities. A range of empirical transit-time density functions has also been used for a similar purpose. The use of the dispersion model with mixed boundary conditions and transit-time density functions has recently been queried by Hisaka and Sugiyama in this journal. We suggest that, consistent with the soil science and chemical engineering literature, the mixed boundary conditions are appropriate provided that concentrations are defined in terms of flux to ensure continuity at the boundaries and mass balance. It is suggested that the use of the inverse Gaussian or other functions as empirical transit-time densities is independent of any boundary condition consideration. The mixed boundary condition solutions of the convection-dispersion model are the easiest to use when linear kinetics applies. In contrast, the closed conditions are easier to apply in a numerical analysis of nonlinear disposition of solutes in organs. We therefore argue that the use of hepatic elimination models should be based on pragmatic considerations, giving emphasis to using the simplest or easiest solution that will give a sufficiently accurate prediction of hepatic pharmacokinetics for a particular application. (C) 2000 Wiley-Liss Inc. and the American Pharmaceutical Association. J Pharm Sci 89:1579-1586, 2000.
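For orientation, the dimensionless convection-dispersion equation commonly written for hepatic elimination of a first-order eliminated solute, restated here for illustration, with D_N the dispersion number, z the normalized axial position and R_N an elimination (efficiency) number:

\[
\frac{\partial C}{\partial \tau}
= D_N \frac{\partial^2 C}{\partial z^2}
- \frac{\partial C}{\partial z}
- R_N C ,
\]

and the choice of mixed versus closed boundary conditions discussed above concerns the conditions imposed on C at z = 0 and z = 1.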
Abstract:
The distributed-tubes model of hepatic elimination is extended to include intermixing between sinusoids, resulting in the formulation of a new, interconnected-tubes model. The new model is analysed for the simple case of two interconnected tubes, where an exact solution is obtained. For the case of many strongly interconnected tubes, it is shown that a zeroth-order approximation leads to the convection-dispersion model. As a consequence, the dispersion number is expressed, for the first time, in terms of its main physiological determinants: heterogeneity of flow and density of interconnections between sinusoids. The analysis of multiple indicator dilution data from a perfused liver preparation using the simplest version of the model yields the estimate 10.3 for the average number of interconnections. The problem of boundary conditions for the dispersion model is considered from the viewpoint that the dispersion-convection equation is a zeroth-order approximation to the equations for the interconnected-tubes model. (C) 1997 Academic Press Limited.
Abstract:
The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the trapezoidal rule exhibits stability that becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
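For reference, the standard single-timestep Newmark update that the partitioned algorithm generalizes, with parameters β and γ (γ = 1/2, β = 1/4 gives the trapezoidal rule referred to above):

\[
\begin{aligned}
\mathbf{u}_{n+1} &= \mathbf{u}_n + \Delta t\, \dot{\mathbf{u}}_n
 + \Delta t^2 \Big[ \big(\tfrac{1}{2} - \beta\big) \ddot{\mathbf{u}}_n + \beta\, \ddot{\mathbf{u}}_{n+1} \Big], \\
\dot{\mathbf{u}}_{n+1} &= \dot{\mathbf{u}}_n
 + \Delta t \Big[ (1 - \gamma)\, \ddot{\mathbf{u}}_n + \gamma\, \ddot{\mathbf{u}}_{n+1} \Big],
\end{aligned}
\]

with \(\ddot{\mathbf{u}}_{n+1}\) obtained from the equation of motion \(\mathbf{M}\ddot{\mathbf{u}}_{n+1} + \mathbf{C}\dot{\mathbf{u}}_{n+1} + \mathbf{K}\mathbf{u}_{n+1} = \mathbf{f}_{n+1}\).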