876 results for Minimization of models
Abstract:
The electronic structure and spectrum of several models of the binuclear metal site in soluble CuA domains of cytochrome-c oxidase have been calculated by the use of an extended version of the complete neglect of differential overlap/spectroscopic method. The experimental spectra have two strong transitions of nearly equal intensity around 500 nm and a near-IR transition close to 800 nm. The model that best reproduces these features consists of a dimer of two blue (type 1) copper centers, in which each Cu atom replaces the missing imidazole on the other Cu atom. Thus, both Cu atoms have one cysteine sulfur atom and one imidazole nitrogen atom as ligands, and there are no bridging ligands but a direct Cu-Cu bond. According to the calculations, the two strong bands in the visible region originate from exciton coupling of the dipoles of the two copper monomers, and the near-IR band is a charge-transfer transition between the two Cu atoms. The known amino acid sequence has been used to construct a molecular model of the CuA site by the use of a template and energy minimization. In this model, the two ligand cysteine residues are in one turn of an alpha-helix, whereas one ligand histidine is in a loop following this helix and the other one is in a beta-strand.
Abstract:
In this paper we present algorithms that operate on pairs of 0,1-matrices whose product is again a matrix of zero and one entries. When applied to such a pair, the algorithms change the number of non-zero entries in the matrices while leaving their product unchanged. We establish the conditions under which the number of 1s decreases. We also recursively define pairs of matrices whose product is a specified matrix and such that applying these algorithms to them minimizes the total number of non-zero entries in both matrices. These matrices may be interpreted as solutions to a well-known information retrieval problem, in which case the number of 1 entries represents the complexity of the retrieval and information update operations.
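The invariant described above, that the product is preserved while the count of 1 entries changes, can be illustrated with a minimal sketch. The matrices and the Boolean-product interpretation below are assumptions chosen for illustration, not the paper's algorithm:

```python
# Sketch (hypothetical example, not the paper's algorithm): two different
# 0/1 factor pairs with the same Boolean product but different total
# numbers of 1 entries.

def bool_mat_mul(A, B):
    """Boolean (0/1) matrix product: C[i][j] = OR over k of A[i][k] AND B[k][j]."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(inner)))
             for j in range(cols)] for i in range(rows)]

def count_ones(*mats):
    """Total number of non-zero entries across all given matrices."""
    return sum(v for M in mats for row in M for v in row)

# Two factorizations of the same 2x2 all-ones matrix:
A1 = [[1, 1], [1, 1]]; B1 = [[1, 1], [1, 1]]   # 8 ones in total
A2 = [[1], [1]];       B2 = [[1, 1]]           # 4 ones, same product
```

Here the second pair realises the same product with half as many non-zero entries, which is exactly the quantity the paper's algorithms seek to minimize.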
Abstract:
This paper first provides a review and analysis of recent trends in innovation infrastructures developed in industrialised countries to promote innovation and competitiveness among high-growth SMEs. It specifically examines the various spatial models developed to support the provision of innovation infrastructure for high-growth sectors.
Abstract:
Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of these models. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link those to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria, such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.
Abstract:
The railhead is severely stressed under the localized wheel contact patch close to the gaps in insulated rail joints. A modified railhead profile in the vicinity of the gapped joint, through a shape optimization model based on a coupled genetic algorithm and finite element method, effectively alters the contact zone and reduces the railhead edge stress concentration significantly. Two optimization methods, a grid search method and a genetic algorithm, were employed for this optimization problem. The optimal results from these two methods are discussed and, in particular, their suitability for the rail end stress minimization problem is studied. Through several numerical examples, the optimal profile is shown to be unaffected by either the magnitude or the contact position of the loaded wheel. The numerical results are validated through a large-scale experimental study.
Abstract:
Currently, open-circuit Bayer refineries pump seawater directly into their operations to neutralize the caustic fraction of the Bayer residue. The resulting supernatant has a reduced pH and is pumped back to the marine environment. This investigation assessed modified seawater sources generated by nanofiltration processes, comparing their relative capacities to neutralize bauxite residues. The chemical stability of the neutralization products, neutralization efficiency, discharge water quality, bauxite residue composition and associated economic benefits were considered to determine the most suitable seawater filtration process based on implementation costs, savings to operations and environmental benefits. The mechanism of neutralization for each technology was determined to be predominantly the formation of Bayer hydrotalcite and calcium carbonate; however, variations in neutralization capacity and efficiency were observed. The neutralization efficiency of each feed source was found to depend on the concentrations of magnesium, aluminium, calcium and carbonate. Nanofiltered seawater with approximately double the amount of magnesium and calcium required half the volume of seawater to achieve the same degree of neutralization. Characterization techniques such as X-ray diffraction (XRD), infrared (IR) spectroscopy and inductively coupled plasma optical emission spectroscopy (ICP-OES) revealed that multiple neutralization steps occur throughout the process.
Abstract:
Finite element (FE) model studies have made important contributions to our understanding of functional biomechanics of the lumbar spine. However, if a model is used to answer clinical and biomechanical questions over a certain population, their inherently large inter-subject variability has to be considered. Current FE model studies, however, generally account only for a single distinct spinal geometry with one set of material properties. This raises questions concerning their predictive power, their range of results and on their agreement with in vitro and in vivo values. Eight well-established FE models of the lumbar spine (L1-5) of different research centres around the globe were subjected to pure and combined loading modes and compared to in vitro and in vivo measurements for intervertebral rotations, disc pressures and facet joint forces. Under pure moment loading, the predicted L1-5 rotations of almost all models fell within the reported in vitro ranges, and their median values differed on average by only 2° for flexion-extension, 1° for lateral bending and 5° for axial rotation. Predicted median facet joint forces and disc pressures were also in good agreement with published median in vitro values. However, the ranges of predictions were larger and exceeded those reported in vitro, especially for the facet joint forces. For all combined loading modes, except for flexion, predicted median segmental intervertebral rotations and disc pressures were in good agreement with measured in vivo values. In light of high inter-subject variability, the generalization of results of a single model to a population remains a concern. This study demonstrated that the pooled median of individual model results, similar to a probabilistic approach, can be used as an improved predictive tool in order to estimate the response of the lumbar spine.
Abstract:
In this paper we have used simulations to make a conjecture about the coverage of a t-dimensional subspace of a d-dimensional parameter space of size n when performing k trials of Latin Hypercube sampling. This takes the form P(k,n,d,t) = 1 - e^(-k/n^(t-1)). We suggest that this coverage formula is independent of d and this allows us to make connections between building Populations of Models and Experimental Designs. We also show that Orthogonal sampling is superior to Latin Hypercube sampling in terms of allowing a more uniform coverage of the t-dimensional subspace at the sub-block size level. These ideas have particular relevance when attempting to perform uncertainty quantification and sensitivity analyses.
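The conjectured coverage formula can be set against a small simulation. The sketch below is a hypothetical re-implementation, not the authors' code; in particular, it assumes that a "trial" means one complete n-point Latin Hypercube design:

```python
# Sketch (assumed interpretation, not the authors' code): empirical coverage
# of a t-dimensional subspace after k trials of Latin Hypercube sampling on
# an n-bin, d-dimensional grid, alongside the conjectured formula
# P(k, n, d, t) = 1 - exp(-k / n**(t-1)).
import math
import random

def latin_hypercube(n, d, rng):
    """One LHS design: n points in d dimensions; every dimension uses each
    of its n bins exactly once."""
    cols = [rng.sample(range(n), n) for _ in range(d)]
    return [tuple(col[i] for col in cols) for i in range(n)]

def subspace_coverage(k, n, d, t, rng):
    """Fraction of the n**t cells of the first t dimensions hit by k
    independent LHS designs."""
    hit = {p[:t] for _ in range(k) for p in latin_hypercube(n, d, rng)}
    return len(hit) / n ** t

def coverage_estimate(k, n, t):
    """The abstract's conjectured coverage, independent of d."""
    return 1 - math.exp(-k / n ** (t - 1))
```

By construction a single design covers every one-dimensional bin, while higher-dimensional projections fill in only gradually as k grows, which is the regime the formula describes.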
Abstract:
In this paper we provide estimates for the coverage of parameter space when using Latin Hypercube Sampling, which forms the basis of building so-called populations of models. The estimates are obtained using combinatorial counting arguments to determine how many trials, k, are needed in order to obtain a specified parameter space coverage for a given value of the discretisation size n. In the case of two dimensions, we show that if the ratio Ø of trials to discretisation size is greater than 1, then as n becomes moderately large the fractional coverage behaves as 1 - e^(-Ø). We compare these estimates with simulation results obtained from an implementation of Latin Hypercube Sampling using MATLAB.
Abstract:
A pulsewidth modulation (PWM) technique is proposed for minimizing the rms torque ripple in inverter-fed induction motor drives subject to a given average switching frequency of the inverter. The proposed PWM technique is a combination of optimal continuous modulation and discontinuous modulation. The proposed technique is evaluated both theoretically as well as experimentally and is compared with well-known PWM techniques. It is shown that the proposed method reduces the rms torque ripple by about 30% at the rated speed of the motor drive, compared to conventional space vector PWM.
Abstract:
A general derivation is given of the coupling constant relations which result on embedding a non-simple group like SU_L(2) ⊗ U(1) in a larger simple group (or graded Lie group). It is shown that such relations depend only on (i) the requirement that the multiplet of vector fields form an irreducible representation of the unifying algebra and (ii) the transformation properties of the fermions under SU_L(2). This point is illustrated in two ways: first by constructing two different unification groups containing the same fermions, which therefore have the same Weinberg angle; second by putting different SU_L(2) structures on the same fermions, which consequently have different Weinberg angles. In particular, the value sin²θ_W = 3/8 is characteristic of sequential doublet models or of models which invoke a large number of additional leptons, like E_6, while the addition of extra charged fermion singlets can reduce the value of sin²θ_W to 1/4. We point out that at the present time the models of grand unification are far from unique.
Abstract:
The microcommands constituting the microprogram of the control memory of a microprogrammed processor can be partitioned into a number of disjoint sets. Some of these sets are then encoded to minimize the word width of the ROM storing the microprogram. A further reduction in the width of the ROM words can be achieved by a technique known as bit steering where one or more bits are shared by two or more sets of microcommands. These sets are called the steerable sets. This correspondence presents a simple method for the detection and encoding of steerable sets. It has been shown that the concurrency matrix of two steerable sets exhibits definite patterns of clusters which can be easily recognized. A relation "connection" has been defined which helps in the detection of three-set steerability. Once steerable sets are identified, their encoding becomes a straightforward procedure following the location of the identifying clusters on the concurrency matrix or matrices.
Abstract:
A simple yet efficient method for the minimization of incompletely specified sequential machines (ISSMs) is proposed. Precise theorems are developed, as a consequence of which several compatibles can be deleted from consideration at the very first stage of the search for a minimal closed cover; the computational work is thereby significantly reduced. The initial cardinality of the minimal closed cover is further reduced by considering the maximal compatibles (MCs) only; as a result, the method converges to the solution faster than existing procedures. The "rank" of a compatible is defined, and it is shown that ordering the compatibles by rank reduces the number of comparisons to be made in the search for the exclusion of compatibles. The new method is simple, systematic and programmable, and does not involve any heuristics or intuitive procedures. For small- and medium-sized machines it can be used for hand computation as well. For one of the illustrative examples used in this paper, 30 out of 40 compatibles can be ignored in accordance with the proposed rules, and only the remaining 10 compatibles need be considered for obtaining a minimal solution.
Abstract:
A simple procedure for the state minimization of an incompletely specified sequential machine whose number of internal states is not very large is presented. It introduces the concept of a compatibility graph from which the set of maximal compatibles of the machine can be very conveniently derived. Primary and secondary implication trees associated with each maximal compatible are then constructed. The minimal state machine covering the incompletely specified machine is then obtained from these implication trees.
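The step of deriving maximal compatibles from a compatibility graph amounts to enumerating its maximal cliques. A minimal sketch, assuming the compatibility relation is already given as an adjacency map (the graph data is hypothetical, and the implication-tree construction is not reproduced):

```python
# Sketch (hypothetical example, not the paper's procedure): the maximal
# compatibles of a machine are the maximal cliques of its compatibility
# graph, enumerated here with the basic Bron-Kerbosch algorithm.

def maximal_cliques(adj):
    """Bron-Kerbosch without pivoting; adj maps vertex -> set of neighbours."""
    cliques = []
    def expand(R, P, X):
        if not P and not X:
            cliques.append(frozenset(R))  # R is maximal: nothing extends it
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return cliques

# Compatibility graph of a hypothetical 4-state machine in which state 'd'
# is compatible only with 'c':
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
# maximal compatibles: {a, b, c} and {c, d}
```

The minimal closed cover is then chosen from among these cliques, which is where the implication trees of the abstract come in.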
Abstract:
Scan circuits generally cause excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes a higher peak power supply current, which results in power supply voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum peak power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by the inclusion of a minimum number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the method can reduce peak power by up to 27%.
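The re-ordering idea can be sketched with a simplified model that is an assumption on our part, not the paper's methodology: take the peak power of an ordering to be the largest Hamming distance between adjacent vectors (a proxy for switching activity) and search the orderings of a small vector set for the minimum peak.

```python
# Sketch (simplified cost model, hypothetical data): exhaustive search for a
# test vector ordering that minimises the peak adjacent-vector Hamming
# distance, used here as a proxy for peak switching activity.
from itertools import permutations

def hamming(u, v):
    """Number of bit positions in which two equal-length vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def min_peak_order(vectors):
    """Brute force over all orderings; feasible only for small sets."""
    best, best_peak = None, float('inf')
    for order in permutations(vectors):
        peak = max(hamming(order[i], order[i + 1])
                   for i in range(len(order) - 1))
        if peak < best_peak:
            best, best_peak = order, peak
    return list(best), best_peak

vecs = ['0000', '1111', '0011', '1100']
```

For realistic test sets the O(n!) search would be replaced by a heuristic, and the paper's lower bound on peak power corresponds here to the largest nearest-neighbour distance in the set.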