975 results for Minimization Problem, Lattice Model


Relevance: 100.00%

Abstract:

A full quantitative understanding of the protein folding problem is now becoming possible with the help of the energy landscape theory and the protein folding funnel concept. Good folding sequences have a landscape that resembles a rough funnel where the energy bias towards the native state is larger than its ruggedness. Such a landscape leads not only to fast folding and stable native conformations but, more importantly, to sequences that are robust to variations in the protein environment and to sequence mutations. In this paper, an off-lattice model of sequences that fold into a β-barrel native structure is used to describe a framework that can quantitatively distinguish good and bad folders. The two sequences analyzed have the same native structure, but one of them is minimally frustrated whereas the other one exhibits a high degree of frustration.
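
As a toy illustration of the funnel criterion above, the sketch below computes a standard minimal-frustration diagnostic from energy landscape theory: a Z-score comparing the energy bias toward the native state against the roughness (standard deviation) of decoy energies. All energies are hypothetical stand-ins; this is not the paper's off-lattice β-barrel model.

```python
# Minimal sketch (hypothetical energies, not the paper's β-barrel model):
# a sequence is minimally frustrated when the energy bias toward the native
# state is large relative to the roughness of the decoy energies.
import numpy as np

def frustration_z_score(native_energy, decoy_energies):
    """Z = (mean decoy energy - native energy) / decoy energy std.
    Large Z indicates a funnel-like landscape (a good folder)."""
    decoys = np.asarray(decoy_energies)
    return (decoys.mean() - native_energy) / decoys.std()

rng = np.random.default_rng(0)
decoys = rng.normal(-60.0, 8.0, 5000)        # rugged non-native states
print("good folder Z:", round(frustration_z_score(-120.0, decoys), 1))
print("bad  folder Z:", round(frustration_z_score(-70.0, decoys), 1))
```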

Relevance: 100.00%

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
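
The constrained-minimization formulation can be made concrete with a minimal sketch of a single Tikhonov-regularized Gauss-Newton step. The matrices below are hypothetical stand-ins, and the paper's scheme additionally estimates the relative regularization weights during the inversion itself.

```python
# Minimal sketch of a single Tikhonov-regularized Gauss-Newton step
# (illustrative stand-in; the paper's scheme also estimates the relative
# regularization weights as part of the inversion).
import numpy as np

def tikhonov_step(J, r, L, weight):
    """Solve min ||J dp - r||^2 + weight^2 ||L dp||^2 for the parameter
    upgrade dp. J: Jacobian (sensitivities), r: residuals, L: operator
    encoding the modeler's preferred values or parameter relationships."""
    A = J.T @ J + weight**2 * (L.T @ L)
    return np.linalg.solve(A, J.T @ r)

rng = np.random.default_rng(1)
J = rng.normal(size=(20, 50))   # 20 observations, 50 parameters: ill-posed
r = rng.normal(size=20)         # model-to-measurement residuals
dp = tikhonov_step(J, r, np.eye(50), weight=1.0)
print(dp.shape)                 # a unique, stable solution despite m < n
```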

Relevance: 100.00%

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
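
The weighted-average interpretation above has a compact expression: for a Tikhonov-regularized linear inversion, the estimate satisfies p_hat = R p_true with resolution matrix R = (J'J + w^2 L'L)^{-1} J'J. The sketch below, with hypothetical sensitivities, shows how the diagonal of R quantifies the loss of detail; this is the standard resolution-matrix construction, stated here for orientation rather than taken from the paper.

```python
# Standard resolution-matrix construction (hypothetical sensitivities):
# under Tikhonov regularization the estimate is p_hat = R @ p_true, so each
# estimated parameter is a weighted average of the true parameter field.
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(15, 40))        # 15 observations, 40 parameters
L, w = np.eye(40), 1.0
R = np.linalg.solve(J.T @ J + w**2 * (L.T @ L), J.T @ J)

# Diagonal entries well below 1 signal loss of resolution: the estimate at
# that point is mostly an average over other (better-informed) locations.
print(np.round(np.diag(R)[:5], 2))
```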

Relevance: 100.00%

Abstract:

Inverse problems based on using experimental data to estimate unknown parameters of a system often arise in biological and chaotic systems. In this paper, we consider parameter estimation in systems biology involving linear and non-linear complex dynamical models, including the Michaelis–Menten enzyme kinetic system, a dynamical model of competence induction in Bacillus subtilis bacteria and a model of feedback bypass in B. subtilis bacteria. We propose several novel techniques for such inverse problems. Firstly, we establish an approximation of a non-linear differential algebraic equation that corresponds to the given biological systems. Secondly, we use the Picard contraction mapping, collage methods and numerical integration techniques to convert the parameter estimation into a minimization problem over the parameters. We propose two optimization techniques: a grid approximation method and a modified hybrid Nelder–Mead simplex search and particle swarm optimization (MH-NMSS-PSO) for non-linear parameter estimation. The two techniques are used for parameter estimation in a model of competence induction in B. subtilis bacteria with noisy data. The MH-NMSS-PSO scheme is applied to a dynamical model of competence induction in B. subtilis bacteria based on experimental data and to the model for feedback bypass. Numerical results demonstrate the effectiveness of our approach.
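
As a minimal sketch of the "parameter estimation as minimization" step, the code below fits the two Michaelis–Menten parameters to noisy synthetic data with a plain Nelder–Mead simplex search. The paper's collage-based formulation and MH-NMSS-PSO hybrid are more elaborate, and all numerical values here are hypothetical.

```python
# Illustrative only: fit Michaelis-Menten parameters (vmax, km) to noisy
# synthetic substrate data by Nelder-Mead search; all values hypothetical.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_obs = np.linspace(0.0, 10.0, 25)

def substrate(params):
    vmax, km = params
    sol = solve_ivp(lambda t, s: -vmax * s / (km + s),
                    (0.0, 10.0), [1.0], t_eval=t_obs)
    return sol.y[0]

rng = np.random.default_rng(3)
data = substrate((0.5, 0.3)) + rng.normal(0.0, 0.01, t_obs.size)

# Parameter estimation recast as a minimization problem over (vmax, km).
loss = lambda p: np.sum((substrate(p) - data) ** 2)
fit = minimize(loss, x0=(1.0, 1.0), method="Nelder-Mead")
print(fit.x)    # should land near the "true" (0.5, 0.3)
```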

Relevance: 100.00%

Abstract:

Purpose – The purpose of this paper is to explore the role of leadership in problem-oriented policing (POP). Design/methodology/approach – This paper uses interrupted time series models to isolate the impact on crime trends of a transformational leader's efforts to spearhead the implementation of a program of POP, called the problem solving model (PSM), in a southern state in Australia. Findings – This paper finds that the PSM led directly to an impact on overall crime, with a significant reduction in crimes per 100,000 persons per year after the introduction of the PSM. The majority of the overall crime drop attributable to implementation of POP was driven by reductions in property crime. It was noted that the leadership influence of the PSM was not effective in reducing all types of crime. Crimes against the person were not affected by the introduction of the PSM, and public nuisance crimes largely followed the forecasted, upward trajectory. Practical implications – The driver behind the PSM was Commissioner Hyde, and the success of the PSM is largely attributable to his strong commitment to transformational leadership and a top-down approach to implementation. These qualities encapsulate the original ideas behind POP that Goldstein (1979, 2003), back in 1979, highlighted as critical for the success of future POP programs. Social implications – Reducing crime is an important part of creating safe communities and improving quality of life for all citizens. This research shows that successful implementation of the PSM within South Australia under the strong leadership of Commissioner Hyde was a major factor in reducing property crime and overall crime rates. Originality/value – This paper is valuable because it demonstrates the link between strong leadership in policing, the commissioner's vision for POP and how his vision then translated into widespread adoption of POP. The study empirically shows that the statewide adoption of POP led to significant reductions in crime, particularly property crime.

Relevance: 100.00%

Abstract:

In this paper, we first recast the generalized symmetric eigenvalue problem, where the underlying matrix pencil consists of symmetric positive definite matrices, into an unconstrained minimization problem by constructing an appropriate cost function. We then extend it to the case of multiple eigenvectors using an inflation technique. Based on this asymptotic formulation, we derive a quasi-Newton-based adaptive algorithm for estimating the required generalized eigenvectors in the data case. The resulting algorithm is modular and parallel, and it is globally convergent with probability one. We also analyze the effect of inexact inflation on the convergence of this algorithm and that of inexact knowledge of one of the matrices (in the pencil) on the resulting eigenstructure. Simulation results demonstrate that the performance of this algorithm is almost identical to that of the rank-one updating algorithm of Karasalo. Further, the performance of the proposed algorithm has been found to remain stable even over 1 million updates without suffering from any error accumulation problems.
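
The recasting can be illustrated with the classical Rayleigh-quotient cost for a symmetric positive definite pencil (A, B): its unconstrained minimizer is the eigenvector of the smallest generalized eigenvalue. The sketch below is a batch illustration only; the paper's specific cost function, adaptive quasi-Newton update and inflation step for multiple eigenvectors are not reproduced.

```python
# Batch illustration: the Rayleigh quotient r(w) = (w'Aw)/(w'Bw) for a
# symmetric positive definite pencil (A, B) is an unconstrained cost whose
# minimizer is the eigenvector of the smallest generalized eigenvalue.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6)); A = M @ M.T + 6 * np.eye(6)   # SPD
N = rng.normal(size=(6, 6)); B = N @ N.T + 6 * np.eye(6)   # SPD

rayleigh = lambda w: (w @ A @ w) / (w @ B @ w)
res = minimize(rayleigh, rng.normal(size=6), method="BFGS")

# Cross-check the minimum against a direct generalized eigensolver.
print(round(rayleigh(res.x), 6), round(eigh(A, B, eigvals_only=True)[0], 6))
```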

Relevance: 100.00%

Abstract:

In this paper, numerical modelling of fracture in concrete using a two-dimensional lattice model is presented, and a few issues related to the lattice modelling technique applicable to concrete fracture are reviewed. A comparison is made between acoustic emission (AE) events and the number of fractured elements. To incorporate the heterogeneity of plain concrete, two methods are used: generating the grain structure of the concrete using Fuller's distribution, and randomly distributing the concrete material properties following a Gaussian distribution. In the first method, the modelling of the concrete at the meso level is carried out following existing methods available in the literature. The aggregates present in the concrete are assumed to be perfect spheres, which appear as circles in the two-dimensional lattice network. A three-point bend (TPB) specimen is tested in the experiment under crack mouth opening displacement (CMOD) control at a rate of 0.0004 mm/sec, and the fracture process in the same TPB specimen is modelled using a regular triangular 2D lattice network. Load versus CMOD plots obtained using both methods are compared with experimental results. It was observed that the number of fractured elements increases near and beyond the peak load, that is, once the crack starts to propagate; AE hits also increase rapidly beyond the peak load. It should be mentioned here that although the lattice modelling of concrete fracture used in this study is very similar to those already available in the literature, the present work brings out certain finer details which are not available explicitly in the earlier works.
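
A drastically simplified sketch of the Gaussian-heterogeneity idea is given below: element strengths are drawn from a Gaussian distribution and elements fail as an imposed strain grows, with the failure count rising sharply around the peak load, mirroring the reported AE trend. This equal-load-sharing toy stands in for the paper's actual triangular beam lattice.

```python
# Equal-load-sharing toy (not the paper's triangular beam lattice): element
# strengths follow a Gaussian distribution, elements fail once the imposed
# strain exceeds their strength, and the count of fractured elements rises
# sharply around the peak load -- the trend reported for the AE hits.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
strength = rng.normal(1.0, 0.2, n).clip(min=0.05)  # Gaussian heterogeneity

for strain in np.linspace(0.25, 2.0, 8):
    broken = int((strength < strain).sum())
    load = strain * (n - broken) / n               # transmitted load
    print(f"strain={strain:.2f}  load={load:.3f}  fractured={broken}")
```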

Relevance: 100.00%

Abstract:

We present the exact solution to a one-dimensional multicomponent quantum lattice model interacting by an exchange operator which falls off as the inverse sinh square of the distance. This interaction contains a variable range as a parameter and can thus interpolate between the known solutions for the nearest-neighbor chain and the inverse-square chain. The energy, susceptibility, charge stiffness, and the dispersion relations for low-lying excitations are explicitly calculated for the absolute ground state, as a function of both the range of the interaction and the number of species of fermions.
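
For orientation, the interaction described above has the standard hyperbolic-exchange form (stated here from the general literature on such models, not quoted from the paper), with kappa the inverse interaction range and P_ij the exchange operator:

```latex
\[
  H \;=\; \sum_{i<j} \frac{\kappa^{2}}{\sinh^{2}\!\big(\kappa\,(x_i - x_j)\big)}\, P_{ij},
\]
```

which reduces to the nearest-neighbor chain as kappa goes to infinity and to the inverse-square chain as kappa goes to zero.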

Relevance: 100.00%

Abstract:

In the present study singular fractal functions (SFF) were used to generate stress-strain plots for quasi-brittle materials like concrete and cement mortar, and subsequently the stress-strain plot of cement mortar obtained using SFF was used for modeling the fracture process in concrete. The fracture surface of concrete is rough and irregular. The fracture surface of concrete is affected by the concrete's microstructure, which is influenced by the water-cement ratio, grade of cement and type of aggregate [1-4]. The macrostructural properties, such as the size and shape of the specimen, the initial notch length and the rate of loading, also contribute to the shape of the fracture surface of concrete. It is known that concrete is a heterogeneous and quasi-brittle material containing micro-defects, and its mechanical properties strongly relate to the presence of micro-pores and micro-cracks in concrete [1-4]. The damage in concrete is believed to be mainly due to the initiation and development of micro-defects with irregularity and fractal characteristics. However, repeated observations at various magnifications also reveal a variety of additional structures that fall between the 'micro' and the 'macro' and have not yet been described satisfactorily in a systematic manner [1-11,15-17]. The concept of singular fractal functions by Mosolov was used to generate stress-strain plots of cement concrete and cement mortar, and subsequently the stress-strain plot of cement mortar was used in a two-dimensional lattice model [28]. A two-dimensional lattice model was used to study concrete fracture by considering softening of the matrix (cement mortar). The results obtained from simulations with the lattice model show the softening behavior of concrete and agree fairly well with the experimental results. The number of fractured elements is compared with the acoustic emission (AE) hits. The trend in the cumulative fractured beam elements in the lattice fracture simulation reasonably reflected the trend in the recorded AE measurements. In other words, the pattern in which AE hits were distributed around the notch has the same trend as that of the fractured elements around the notch, which supports the lattice model. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

We study the effects of extended and localized potentials and a magnetic field on the Dirac electrons residing at the surface of a three-dimensional topological insulator like Bi2Se3. We use a lattice model to numerically study the various states; we show how the potentials can be chosen in a way which effectively avoids the problem of fermion doubling on a lattice. We show that extended potentials of different shapes can give rise to states which propagate freely along the potential but decay exponentially away from it. For an infinitely long potential barrier, the dispersion and spin structure of these states are unusual and these can be varied continuously by changing the barrier strength. In the presence of a magnetic field applied perpendicular to the surface, these states become separated from the gapless surface states by a gap, thereby giving rise to a quasi-one-dimensional system. Similarly, a magnetic field along with a localized potential can give rise to exponentially localized states which are separated from the surface states by a gap and thereby form a zero-dimensional system. Finally, we show that a long barrier and an impurity potential can produce bound states which are localized at the impurity, and an "L"-shaped potential can have both bound states at the corner of the L and extended states which travel along the arms of the potential. Our work opens the way to constructing wave guides for Dirac electrons.
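
The fermion-doubling problem mentioned above can be seen in a few lines of lattice algebra. The sketch below uses the textbook Wilson-mass construction to gap out the spurious Dirac cones at the zone boundary; note that this is only an illustration of the doubling issue, since the paper avoids doubling through its choice of potentials rather than a Wilson term.

```python
# Illustration of fermion doubling (the paper instead avoids doubling via
# its choice of potentials; the Wilson mass below is the textbook fix).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def gap(kx, ky, wilson):
    m = (2 - np.cos(kx) - np.cos(ky)) if wilson else 0.0
    H = np.sin(kx) * sx + np.sin(ky) * sy + m * sz
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]

# The naive lattice Dirac operator has spurious gapless points at the zone
# boundary; the Wilson term gaps them while leaving k = 0 untouched.
for k in [(0.0, 0.0), (np.pi, 0.0), (np.pi, np.pi)]:
    print(k, "naive:", round(gap(*k, False), 2),
             "Wilson:", round(gap(*k, True), 2))
```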

Relevance: 100.00%

Abstract:

Using numerical diagonalization we study the crossover among different random matrix ensembles (Poissonian, Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE) and Gaussian symplectic ensemble (GSE)) realized in two different microscopic models. The specific diagnostic tool used to study the crossovers is the level spacing distribution. The first model is a one-dimensional lattice model of interacting hard-core bosons (or equivalently spin 1/2 objects) and the other a higher dimensional model of non-interacting particles with disorder and spin-orbit coupling. We find that the perturbation causing the crossover among the different ensembles scales to zero with system size as a power law with an exponent that depends on the ensembles between which the crossover takes place. This exponent is independent of microscopic details of the perturbation. We also find that the crossover from the Poissonian ensemble to the other three is dominated by the Poissonian to GOE crossover which introduces level repulsion while the crossover from GOE to GUE or GOE to GSE associated with symmetry breaking introduces a subdominant contribution. We also conjecture that the exponent is dependent on whether the system contains interactions among the elementary degrees of freedom or not and is independent of the dimensionality of the system.
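
The diagnostic itself is easy to reproduce. The sketch below draws a GOE matrix, computes nearest-neighbor level spacings in the spectral bulk with a crude mean-spacing unfolding, and compares them against the Wigner surmise p(s) = (pi/2) s exp(-pi s^2/4) that signals level repulsion (Poissonian spectra give p(s) = exp(-s) instead). The matrix size here is generic, not taken from the paper's models.

```python
# Generic level-spacing check (not the paper's microscopic models): GOE
# spacings in the spectral bulk follow the Wigner surmise and show level
# repulsion, unlike the Poissonian p(s) = exp(-s).
import numpy as np

rng = np.random.default_rng(7)
n = 1000
M = rng.normal(size=(n, n))
H = (M + M.T) / np.sqrt(2 * n)             # GOE normalization

e = np.linalg.eigvalsh(H)
bulk = e[int(0.45 * n):int(0.55 * n)]      # near-constant level density
s = np.diff(bulk)
s /= s.mean()                              # crude unfolding

# Level repulsion: very few small spacings, as the Wigner CDF predicts.
print("fraction of spacings < 0.25:", round(float(np.mean(s < 0.25)), 3))
print("Wigner-surmise prediction:  ",
      round(1 - np.exp(-np.pi * 0.25**2 / 4), 3))
```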

Relevance: 100.00%

Abstract:

Time-varying linear prediction has been studied in the context of speech signals, in which the auto-regressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least squares minimization is used for the estimation of the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an ℓ0-norm minimization problem. The re-weighted ℓ1-norm minimization technique is used to estimate the model parameters. We show that for sparsely excited time-varying systems, the formulation models the underlying system function better than the least squares error minimization approach. Evaluation with synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least squares approach.
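
The re-weighting step can be sketched generically (in the style of the Candès–Wakin–Boyd re-weighted ℓ1 algorithm, applied here to a plain sparse recovery problem rather than the paper's time-varying prediction residuals): each outer pass solves a weighted ℓ1 problem by iterative soft-thresholding, with weights 1/(|x|+ε) that push small coefficients toward zero and so approximate the ℓ0 penalty.

```python
# Generic re-weighted ℓ1 sketch (Candès-Wakin-Boyd style; applied to a
# plain sparse recovery problem, not the paper's prediction residuals).
import numpy as np

rng = np.random.default_rng(8)
m, n, k = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
y = A @ x_true

lam, eps = 1e-2, 1e-1
step = 1.0 / np.linalg.norm(A, 2) ** 2       # ISTA step size
x = np.zeros(n)
for _ in range(5):                           # outer re-weighting passes
    w = 1.0 / (np.abs(x) + eps)              # small coeffs -> big weights
    for _ in range(500):                     # weighted ℓ1 via ISTA
        g = x - step * (A.T @ (A @ x - y))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)

print("true support:     ", sorted(np.flatnonzero(x_true).tolist()))
print("estimated support:", sorted(np.flatnonzero(np.abs(x) > 0.1).tolist()))
```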

Relevance: 100.00%

Abstract:

Minimization problems with respect to a one-parameter family of generalized relative entropies are studied. These relative entropies, which we term relative alpha-entropies (denoted I-alpha), arise as redundancies under mismatched compression when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the usual relative entropy (Kullback-Leibler divergence). Just like relative entropy, these relative alpha-entropies behave like squared Euclidean distance and satisfy the Pythagorean property. Minimizers of these relative alpha-entropies on closed and convex sets are shown to exist. Such minimizations generalize the maximum Renyi or Tsallis entropy principle. The minimizing probability distribution (termed forward I-alpha-projection) for a linear family is shown to obey a power-law. Other results in connection with statistical inference, namely subspace transitivity and iterated projections, are also established. In a companion paper, a related minimization problem of interest in robust statistics that leads to a reverse I-alpha-projection is studied.
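
For orientation, one common form of the relative α-entropy (following Sundaresan; stated here as background rather than quoted from the paper) is

```latex
\[
  I_{\alpha}(P \Vert Q)
  \;=\; \frac{\alpha}{1-\alpha} \log \sum_{x} p(x)\, q(x)^{\alpha-1}
  \;-\; \frac{1}{1-\alpha} \log \sum_{x} p(x)^{\alpha}
  \;+\; \log \sum_{x} q(x)^{\alpha},
\]
```

which recovers the Kullback-Leibler divergence D(P||Q) in the limit α → 1.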

Relevance: 100.00%

Abstract:

To simulate fracture behaviors in concrete more realistically, a theoretical analysis of a potential problem with the quasi-static method is presented, and a novel algorithm is then proposed which takes into account the inertia effect due to unstable crack propagation while requiring much lower computational effort than a purely dynamic method. The inertia effect due to load increase becomes less important and can be ignored as the loading rate decreases, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, results may become questionable if a fracture process including unstable cracking is simulated by a quasi-static procedure that excludes inertia effects completely. However, simulating experiments at moderate loading rates with the purely dynamic method requires much higher computational effort. In this investigation, which can be taken as a natural continuation of earlier work, the potential problem of the quasi-static method is analyzed based on the dynamic equations of motion. One solution to this problem is the new algorithm mentioned above. Numerical examples are provided by the generalized beam (GB) lattice model to show both fracture processes under different loading rates and the capability of the new algorithm.

Relevance: 100.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, which is a unifying computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
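
The low-rank/full-rank separation idea mentioned above can be illustrated with a generic convex program (a standard nuclear-norm formulation with hypothetical data, not the thesis's identification algorithm): minimize the nuclear norm of the candidate low-rank "global" component plus a quadratic data-fit term, which amounts to singular-value soft-thresholding of the observed matrix.

```python
# Generic nuclear-norm separation sketch with hypothetical data (not the
# thesis's algorithm): the low-rank component stands in for the high-order
# but low-rank global dynamics, the small residual for the local part.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(9)
m = 30
G_true = rng.normal(size=(m, 2)) @ rng.normal(size=(2, m))  # low rank
M = G_true + 0.1 * rng.normal(size=(m, m))                  # observed sum

G = cp.Variable((m, m))
# Nuclear norm promotes low rank; the quadratic term keeps G close to M.
cp.Problem(cp.Minimize(cp.normNuc(G) + 0.2 * cp.sum_squares(M - G))).solve()
print("recovered rank:", np.linalg.matrix_rank(G.value, tol=1.0))
```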