Abstract:
CXTANNEAL is a program for analysing contaminant transport in soils. The code, written in Fortran 77, is a modified version of CXTFIT, a commonly used package for estimating solute transport parameters in soils. The key improvement in the present code is the inclusion of simulated annealing as the optimization technique for curve fitting. Tests with hypothetical data show that CXTANNEAL performs better than the original code in searching for optimal parameter estimates. To reduce the computational time, a parallel version of CXTANNEAL (CXTANNEAL_P) was also developed. (C) 1999 Elsevier Science Ltd. All rights reserved.
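The abstract names the technique but not its mechanics. As a rough illustration only (not CXTANNEAL's actual Fortran code), here is a minimal simulated-annealing curve fit in Python; the model function, step size and cooling schedule are placeholders, not the package's own:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(t, params):
    # Placeholder response curve; CXTFIT/CXTANNEAL fit solute breakthrough models instead.
    a, b = params
    return a * (1.0 - np.exp(-b * t))

def sse(params, t, obs):
    return np.sum((model(t, params) - obs) ** 2)

def anneal(t, obs, x0, T0=1.0, cooling=0.95, n_iter=2000, step=0.1):
    x = np.asarray(x0, dtype=float)
    fx, T = sse(x, t, obs), T0
    best, fbest = x.copy(), fx
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.size)      # random neighbour
        fc = sse(cand, t, obs)
        # Metropolis rule: accept improvements; accept worse moves with prob. exp(-df/T)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T = max(T * cooling, 1e-12)                        # geometric cooling with a floor
    return best, fbest

t = np.linspace(0.0, 10.0, 50)
obs = model(t, (2.0, 0.5)) + 0.05 * rng.standard_normal(t.size)   # synthetic data
print(anneal(t, obs, x0=(1.0, 1.0)))
```

Unlike gradient-based least squares, the Metropolis acceptance step lets the search escape local minima of the sum-of-squares surface, which is the advantage the abstract reports.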
Abstract:
This paper is devoted to the problems of finding the load flow feasibility, saddle-node, and Hopf bifurcation boundaries in the space of power system parameters. The first part contains a review of the existing relevant approaches, including not-so-well-known contributions from Russia. The second part presents a new robust method for finding the power system load flow feasibility boundary on the plane defined by any three vectors of dependent variables (nodal voltages), called the Delta plane. The method exploits some quadratic and linear properties of the load flow equations and state matrices written in rectangular coordinates. An advantage of the method is that it does not require an iterative solution of nonlinear equations (except for the eigenvalue problem). In addition to benefits for visualization, the method is a useful tool for topological studies of power system multiple solution structures and stability domains. Although the power system application is developed here, the method can be equally efficient for any quadratic algebraic problem.
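For context, the quadratic structure the method exploits is the textbook one (not quoted from the paper): with nodal voltages in rectangular coordinates, V_i = e_i + j f_i, and bus admittance matrix Y = G + jB, the injected powers are

```latex
P_i = \sum_k \big[\, e_i (G_{ik} e_k - B_{ik} f_k) + f_i (G_{ik} f_k + B_{ik} e_k) \,\big],
\qquad
Q_i = \sum_k \big[\, f_i (G_{ik} e_k - B_{ik} f_k) - e_i (G_{ik} f_k + B_{ik} e_k) \,\big]
```

so each mismatch equation is a quadratic form $x^{\mathsf T} A_i x$ with a constant matrix $A_i$, and along any ray $x = x_0 + t\,\Delta x$ in the Delta plane each mismatch is an exact parabola in $t$. This is what removes the need for iterative solution of nonlinear equations.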
Abstract:
In this paper, a new ν-metric based approach is proposed to design decentralized controllers for multi-unit nonlinear plants that admit a set of plant decompositions in an operating space. As with the gap metric approach in the literature, it is shown that the operating space can be divided into several subregions based on a ν-metric indicator, each of which admits the same controller structure. A comparative case study is presented to display the advantages of the proposed approach over the gap metric approach. (C) 2000 Elsevier Science Ltd. All rights reserved.
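For reference, for SISO plants the ν-metric (Vinnicombe's ν-gap) has the standard closed form below; the paper's multi-unit indicator built on it is not reproduced here:

```latex
\delta_\nu(P_1, P_2) \;=\; \sup_{\omega}\,
\frac{\lvert P_1(j\omega) - P_2(j\omega)\rvert}
     {\sqrt{1 + \lvert P_1(j\omega)\rvert^2}\;\sqrt{1 + \lvert P_2(j\omega)\rvert^2}}
```

provided a winding-number condition on $1 + P_2^*(j\omega)P_1(j\omega)$ holds; otherwise $\delta_\nu = 1$. The standard result that a controller stabilizes every plant within its ν-gap stability margin is what justifies covering the operating space with subregions that share one controller.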
Abstract:
The simultaneous design of the steady-state and dynamic performance of a process can satisfy much more demanding dynamic performance criteria than the design of dynamics only through the connection of a control system. A method for designing process dynamics based on the use of a linearised system's eigenvalues has been developed. The eigenvalues are associated with system states using the unit perturbation spectral resolution (UPSR), characterising the dynamics of each state. The design method uses a homotopy approach to determine a final design which satisfies both steady-state and dynamic performance criteria. A highly interacting single-stage forced circulation evaporator system, including control loops, was designed by this method with the goal of reducing the time taken for the liquid composition to reach steady state. Initially the system was successfully redesigned to speed up the eigenvalue associated with the liquid composition state, but this did not result in an improved startup performance. Further analysis showed that the integral action of the composition controller was the source of the limiting eigenvalue. Design changes made to speed up this eigenvalue did result in an improved startup performance. The proposed approach provides a structured way to address the design-control interface, giving significant insight into the dynamic behaviour of the system so that a systematic design or redesign of an existing system can be undertaken with confidence.
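The UPSR construction itself is specific to the paper, but the underlying association of eigenvalues with states can be sketched with participation-factor-style weights computed from the linearised state matrix; a minimal illustration of the general idea, not the paper's formulation:

```python
import numpy as np

def eigen_state_association(A):
    """For x' = A x, weight of each eigenvalue in each state's unit-perturbation response."""
    lam, V = np.linalg.eig(A)      # columns of V: right eigenvectors
    W = np.linalg.inv(V)           # rows of W: left eigenvectors
    # p[k, i] = V[i, k] * W[k, i]: contribution of eigenvalue lam[k] to state i's
    # response when state i alone is perturbed; each column sums to 1.
    p = (V * W.T).T
    return lam, p

# The eigenvalue limiting a given state shows up as a large |p[k, i]| for the k
# with the largest (least negative) real part.
lam, p = eigen_state_association(np.array([[-1.0, 0.5], [0.0, -10.0]]))
print(lam)           # eigenvalues
print(np.abs(p))     # inspect magnitudes (useful for complex pairs)
```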
Abstract:
A case-sensitive intelligent model editor has been developed for constructing consistent lumped dynamic process models and for simplifying them using modelling assumptions. The approach is based on a systematic assumption-driven modelling procedure and on the syntax and semantics of process models and the simplifying assumptions.
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipeline implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound for the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. The popular genetic algorithm approach is adopted since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
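Independent of the GA machinery, the stability test underlying the problem is simple to state: a recursive (IIR) filter is stable exactly when all its poles, the roots of the denominator polynomial, lie strictly inside the unit circle. A minimal check:

```python
import numpy as np

def is_stable(denominator):
    """Stability of a recursive filter H(z) = B(z)/A(z): all poles inside the unit circle.

    `denominator` holds the coefficients of A(z) in descending powers of z.
    """
    poles = np.roots(denominator)
    return bool(np.all(np.abs(poles) < 1.0))

# Example: A(z) = 1 - 0.5 z^-1 -> coefficients [1, -0.5], pole at z = 0.5 (stable)
print(is_stable([1.0, -0.5]))   # True
print(is_stable([1.0, -1.5]))   # False: pole at z = 1.5
```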
Abstract:
Peptides that induce and recall T-cell responses are called T-cell epitopes. T-cell epitopes may be useful in a subunit vaccine against malaria. Computer models that simulate peptide binding to MHC are useful for selecting candidate T-cell epitopes since they minimize the number of experiments required for their identification. We applied a combination of computational and immunological strategies to select candidate T-cell epitopes. A total of 86 experimental binding assays were performed in three rounds of identification of HLA-A11 binding peptides from the six pre-erythrocytic malaria antigens. Thirty-six peptides were experimentally confirmed as binders. We show that the cyclical refinement of the ANN models results in a significant improvement in the efficiency of identifying potential T-cell epitopes. (C) 2001 by Elsevier Science Inc.
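The study's network details are not given above; the following is a hedged sketch of the general scheme, with synthetic placeholder peptides and labels, sparse (one-hot) encoding of 9-mers, and scikit-learn's generic MLP standing in for the paper's ANN:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"                 # the 20 standard amino acids
IDX = {a: i for i, a in enumerate(AA)}

def encode(peptide):
    """Sparse (one-hot) encoding of a 9-mer peptide: 9 positions x 20 residues."""
    x = np.zeros((9, 20))
    for pos, aa in enumerate(peptide):
        x[pos, IDX[aa]] = 1.0
    return x.ravel()

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(200)]   # synthetic stand-ins
labels = rng.integers(0, 2, len(peptides))                          # placeholder binder labels

X = np.array([encode(p) for p in peptides])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X, labels)
scores = clf.predict_proba(X)[:, 1]
# In the cyclical refinement loop, top-scoring untested peptides would be assayed and
# the confirmed binders/non-binders added back into the training set for the next round.
```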
Abstract:
Environmental processes have been modelled for decades. However, the need for integrated assessment and modelling (IAM) has grown as the extent and severity of environmental problems in the 21st Century worsen. The scale of IAM is not restricted to the global level, as in climate change models, but includes local and regional models of environmental problems. This paper discusses various definitions of IAM and identifies five different types of integration that are needed for the effective solution of environmental problems. The future is then depicted in the form of two brief scenarios: one optimistic and one pessimistic. The current state of IAM is then briefly reviewed. The issues of complexity and validation in IAM are recognised as more complex than in traditional disciplinary approaches. Communication is identified as a central issue, both internally among team members and externally with decision-makers, stakeholders and other scientists. Finally, it is concluded that the process of integrated assessment and modelling is as important as the product for any particular project. By learning to work together and recognising the contribution of all team members and participants, it is believed that we will have a strong scientific and social basis to address the environmental problems of the 21st Century. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Field and laboratory observations have shown that a relatively low beach groundwater table enhances beach accretion. These observations have led to the beach dewatering technique (artificially lowering the beach water table) for combating beach erosion. Here we present a process-based numerical model that simulates the interacting processes of wave motion on the beach, coastal groundwater flow, swash sediment transport and beach profile changes. Results of model simulations demonstrate that the model replicates the accretionary effects of a low beach water table on beach profile changes and has the potential to become a tool for assessing the effectiveness of beach dewatering systems. (C) 2002 Elsevier Science Ltd. All rights reserved.
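The full model couples wave, groundwater and sediment modules; the groundwater component alone is commonly described by the nonlinear Boussinesq equation, n_e ∂h/∂t = K ∂/∂x (h ∂h/∂x), for watertable elevation h(x, t). A minimal explicit finite-difference sketch of just that component follows; the parameters and boundary treatment are illustrative, not the paper's scheme:

```python
import numpy as np

K, NE = 1e-3, 0.3     # hydraulic conductivity [m/s] and effective porosity (placeholders)

def boussinesq_step(h, dx, dt):
    """One explicit step of NE*dh/dt = K*d/dx(h*dh/dx); end values held fixed (given heads)."""
    flux = K * 0.5 * (h[1:] + h[:-1]) * (h[1:] - h[:-1]) / dx   # K*h*dh/dx at cell faces
    h_new = h.copy()
    h_new[1:-1] += (dt / NE) * (flux[1:] - flux[:-1]) / dx      # divergence of the flux
    return h_new

x = np.linspace(0.0, 50.0, 101)                     # cross-shore coordinate [m]
dx = x[1] - x[0]
h = 2.0 + 0.5 * np.exp(-((x - 25.0) / 5.0) ** 2)    # initial watertable mound [m]
dt = 0.2 * NE * dx**2 / (K * h.max())               # well inside the explicit stability bound
for _ in range(1000):
    h = boussinesq_step(h, dx, dt)                  # the mound spreads and decays
```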
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. The GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained one, which is appropriate for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained with extremely rapid convergence, after exploration of only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
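A minimal sketch of the penalty-based GA scheme described above, with hypothetical cost and strength-check functions in place of the actual reinforced-concrete design code checks; the three discrete genes stand in for slab thickness, bar diameter and bar spacing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete candidate values for the three design variables (hypothetical tables)
THICKNESS = np.arange(150, 451, 25)              # slab thickness [mm]
DIAMETER = np.array([10, 12, 16, 20, 25, 32])    # bar diameter [mm]
SPACING = np.arange(100, 301, 25)                # bar spacing [mm]
SIZES = [len(THICKNESS), len(DIAMETER), len(SPACING)]

def fitness(ind):
    t, d, s = THICKNESS[ind[0]], DIAMETER[ind[1]], SPACING[ind[2]]
    cost = t + 50.0 * d**2 / s                   # placeholder material-cost measure
    capacity = t * d**2 / s                      # placeholder strength check
    violation = max(0.0, 800.0 - capacity)       # hypothetical demand of 800 units
    return cost + 1e3 * violation                # penalty makes the problem unconstrained

def ga(pop_size=40, gens=100, p_mut=0.1):
    pop = [np.array([rng.integers(n) for n in SIZES]) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            # two binary tournaments pick the parents
            p1 = min((pop[i] for i in rng.choice(pop_size, 2)), key=fitness)
            p2 = min((pop[i] for i in rng.choice(pop_size, 2)), key=fitness)
            cut = rng.integers(1, 3)                           # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            for g, n in enumerate(SIZES):                      # per-gene mutation
                if rng.random() < p_mut:
                    child[g] = rng.integers(n)
            new.append(child)
        pop = new
    return min(pop, key=fitness)

best = ga()
print(THICKNESS[best[0]], DIAMETER[best[1]], SPACING[best[2]])
```

The penalty weight converts any constraint violation into a fitness cost, so the standard unconstrained GA operators (tournament selection, one-point crossover, mutation) apply unchanged.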
Abstract:
While multimedia data, image data in particular, is an integral part of most websites and web documents, our quest for information is so far still restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we face a number of challenging problems. Firstly, there is the ambiguous and highly subjective nature of defining image semantics and similarity. Secondly, multimedia data can come from highly diversified sources, as a result of automatic image capturing and generation processes. Finally, multimedia information exists in decentralised sources over the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval topics, addressing some aspects of these challenges. These papers have been selected from the conference proceedings (Kluwer Academic Publishers, ISBN 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29–31 May 2002.
Abstract:
The paper presents a theory for modeling flow in anisotropic, viscous rock. This theory was originally developed for the simulation of large deformation processes, including the folding and kinking of multi-layered visco-elastic rock (Muhlhaus et al. [1,2]). The orientation of slip planes in the context of crystallographic slip is determined by the normal vector (the director) of these surfaces. The model is applied to simulate anisotropic mantle convection. We compare the evolution of flow patterns, Nusselt number and director orientations for isotropic and anisotropic rheologies. In the simulations we utilize two different finite element methodologies: the Lagrangian Integration Point method of Moresi et al. [8] and an Eulerian formulation, which we implemented into the finite element based PDE solver Fastflo (www.cmis.csiro.au/Fastflo/). The reason for utilizing two different finite element codes was firstly to study the influence of an anisotropic power law rheology, which is currently not implemented in the Lagrangian Integration Point scheme [8], and secondly to compare the numerical performance of the Eulerian (Fastflo) and Lagrangian integration schemes [8]. It turned out that whereas in the Lagrangian method the Nusselt number vs. time plot reached only a quasi steady state, in which the Nusselt number oscillates around a steady-state value, the Eulerian scheme reaches exact steady states and produces a high degree of alignment (director orientation locally orthogonal to the velocity vector almost everywhere in the computational domain). In the simulations, emergent anisotropy was strongest, in terms of modulus contrast, in the up- and down-welling plumes. Mechanisms for anisotropic material behavior in the mantle dynamics context are discussed by Christensen [3]. The dominant mineral phases in the mantle generally do not exhibit strong elastic anisotropy, but they may still be oriented by the convective flow. Thus viscous anisotropy (the main focus of this paper) may or may not correlate with elastic or seismic anisotropy.
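A common form of such a director-based, transversely isotropic viscous law (consistent with, though not quoted from, [1,2]) uses a normal viscosity η and a reduced shear viscosity η_s on planes with unit normal n_i:

```latex
\tau_{ij} = 2\eta\, D_{ij} - 2(\eta - \eta_s)\,\Lambda_{ijkl}\, D_{kl},
\qquad
\Lambda_{ijkl} = \tfrac{1}{2}\left(n_i n_k \delta_{jl} + n_j n_k \delta_{il}
              + n_i n_l \delta_{jk} + n_j n_l \delta_{ik}\right) - 2\, n_i n_j n_k n_l
```

with D and W the stretching and spin tensors. One standard kinematic rule for a director normal to a material surface, kept at unit length, is

```latex
\dot{n}_i = W_{ij} n_j - D_{ij} n_j + \left(n_k D_{kl} n_l\right) n_i
```

which rotates the director with the deforming material surface and drives the flow-induced alignment the abstract reports.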
Abstract:
Taking functional programming to its extremities in search of simplicity still requires integration with other development methods (e.g. formal methods). Induction is the key to deriving and verifying functional programs, but it can be simplified by packaging proofs with functions, particularly folds, on data (structures). Totally Functional Programming (TFP) avoids the complexities of interpretation by directly representing data (structures) as platonic combinators, the functions characteristic of the data. The link between the two simplifications is that platonic combinators are a kind of partially-applied fold, which means that they inherit fold-theoretic properties, albeit with some apparent simplifications due to the platonic combinator representation. However, despite observable behaviour within functional programming that suggests TFP is widely applicable, significant work remains before TFP as such could be widely adopted.
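The data-as-fold idea can be sketched outside a typed functional setting; here, illustratively, in Python, a list is represented directly as its own fold, which is exactly the "partially-applied fold" reading of platonic combinators:

```python
# A "platonic" list is its own fold: it takes the two fold arguments directly.
nil = lambda step, base: base
cons = lambda head, tail: (lambda step, base: step(head, tail(step, base)))

xs = cons(1, cons(2, cons(3, nil)))              # the list [1, 2, 3] as a combinator

total = xs(lambda h, acc: h + acc, 0)            # sum          -> 6
length = xs(lambda h, acc: 1 + acc, 0)           # length       -> 3
as_list = xs(lambda h, acc: [h] + acc, [])       # back to data -> [1, 2, 3]
```

There is no interpreter or pattern match anywhere: every operation on the list is obtained by supplying fold arguments, so fold-theoretic reasoning (e.g. fold fusion) applies to the representation directly.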