952 results for Nonhomogeneous initial-boundary-value problems
Abstract:
This paper presents some initial attempts to mathematically model the dynamics of a continuous estimation of distribution algorithm (EDA) based on a Gaussian distribution and truncation selection. Case studies are conducted on both unimodal and multimodal problems to highlight the effectiveness of the proposed technique and to explore some important properties of the EDA. Under some general assumptions, we show that, for 1D unimodal problems with the (mu, lambda) scheme: (1) the behaviour of the EDA depends only on the general shape of the test function, rather than on its specific form; (2) when initialized far from the global optimum, the EDA has a tendency to converge prematurely; (3) given a certain selection pressure, there is a unique value of the proposed amplification parameter that helps the EDA achieve desirable performance. For 1D multimodal problems: (1) the EDA can get stuck with the (mu, lambda) scheme; (2) the EDA never gets stuck with the (mu + lambda) scheme.
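The Gaussian (mu, lambda)-EDA with truncation selection described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `eda_1d`, its parameters, and the use of `amplify` as a stand-in for the amplification parameter are all assumptions for the sake of the example.

```python
import random
import statistics

def eda_1d(fitness, mean0, sigma0, lam=100, mu=20, amplify=1.0, gens=60):
    """Sketch of a 1D Gaussian (mu, lambda)-EDA with truncation selection.

    `amplify` is a hypothetical stand-in for the amplification parameter:
    it rescales the refitted standard deviation each generation.
    """
    m, s = mean0, sigma0
    for _ in range(gens):
        pop = [random.gauss(m, s) for _ in range(lam)]      # sample lambda offspring
        elite = sorted(pop, key=fitness)[:mu]               # truncation selection: keep best mu
        m = statistics.fmean(elite)                         # refit the Gaussian mean
        s = max(statistics.pstdev(elite) * amplify, 1e-12)  # refit (amplified) std
    return m

random.seed(1)
# Minimise f(x) = x^2, deliberately initialised far from the optimum at 0.
stalled = eda_1d(lambda x: x * x, mean0=10.0, sigma0=3.0)               # variance often collapses early
boosted = eda_1d(lambda x: x * x, mean0=10.0, sigma0=3.0, amplify=2.0)  # amplification slows the collapse
```

With `amplify = 1.0` the fitted variance tends to shrink faster than the mean travels, so the run often stalls short of the optimum, mirroring the premature-convergence behaviour noted in the abstract; a larger amplification counteracts this.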
Abstract:
Aquifers are a vital water resource whose quality characteristics must be safeguarded or, if damaged, restored. The extent and complexity of aquifer contamination are related to the characteristics of the porous medium, the influence of the boundary conditions, and the biological, chemical and physical processes involved. Since the 1990s, scientific effort has grown rapidly on efficient ways of estimating the hydraulic parameters of aquifers and thus recovering the contaminant source position and its release history. To isolate and understand the influence of these various factors on aquifer phenomena, researchers commonly use numerical and controlled experiments. This work presents some of these methods, applying and comparing them on data collected during laboratory, field and numerical tests. The work is structured in four parts, which present the results and conclusions for each of the specific objectives.
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework adopted in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem as a function of the PAC parameters. The exact sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. It is therefore important to know exactly how the sample-size requirement depends on the VC dimension and, with that in mind, we describe a general algorithm for learning problems of VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters) and turns out to be the same for all non-trivial learning problems of VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
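The kind of worst-case upper bound the abstract contrasts with exact, algorithm-dependent sample sizes can be made concrete. The sketch below computes one classical bound for a consistent learner (the Blumer et al. 1989 form) as a function of the VC dimension d and the PAC parameters; it is illustrative of the gap the paper targets, not a result from the paper itself.

```python
import math

def pac_sample_upper_bound(d, eps, delta):
    """Classical worst-case upper bound on the number of examples a
    consistent learner needs to PAC-learn a class of VC dimension d
    to accuracy eps with confidence 1 - delta (Blumer et al., 1989).
    Such bounds can be far from the exact, algorithm-dependent
    sample-size requirement, which is the paper's point."""
    a = (4.0 / eps) * math.log2(2.0 / delta)
    b = (8.0 * d / eps) * math.log2(13.0 / eps)
    return math.ceil(max(a, b))

# Even for VC dimension 1 the worst-case bound is in the hundreds of examples.
m = pac_sample_upper_bound(d=1, eps=0.1, delta=0.05)
```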
Abstract:
This paper is based upon the initial findings of a CIMA research project into the way in which corporate performance measurement systems are influenced by the use of shareholder value management techniques. It compares and contrasts the techniques in use in a sample of 10 companies that either explicitly use shareholder value techniques, also known as Value-Based Management (VBM), or explicitly do not use such techniques. The analysis undertaken is based upon the findings of semi-structured interviews with company representatives, which formed the first part of the data collection process of the project. The analysis traces the interactions between corporate objectives, decision-making criteria, performance measurement systems and executive incentive schemes in order to develop an understanding of the effects of such shareholder value techniques upon corporate behaviour. The literature reviewed suggests that the other aspects of the planning and control system should be aligned with the corporate objectives whether a company has adopted VBM or not. This research therefore contributes new evidence on the use of VBM techniques in the UK and, more generally, on whether VBM and non-VBM companies' internal planning and control systems are aligned.
Abstract:
There has been a revival of interest in economic techniques to measure the value of a firm through the use of economic value added as a technique for measuring such value to shareholders. This technique, based upon the concept of economic value equating to total value, is founded upon the assumptions of classical liberal economic theory. Such techniques have been subject to criticism both from the point of view of the level of adjustment to published accounts needed to make the technique work and from the point of view of the validity of such techniques in actually measuring value in a meaningful context. This paper critiques economic value added techniques as a means of calculating changes in shareholder value, contrasting such techniques with more traditional techniques of measuring value added. It uses the company Severn Trent plc as an actual example in order to evaluate and contrast the techniques in action. The paper demonstrates discrepancies between the calculated results from using economic value added analysis and those reported using conventional accounting measures. It considers the merits of the respective techniques in explaining shareholder and managerial behaviour and the problems with using such techniques in considering the wider stakeholder concept of value. It concludes that this economic value added technique has merits when compared with traditional accounting measures of performance but that it does not provide the universal panacea claimed by its proponents.
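The core mechanics of the economic value added calculation critiqued above can be shown in a few lines. The figures are hypothetical, chosen only to illustrate the arithmetic: EVA is net operating profit after tax less a charge for the capital employed.

```python
# Hypothetical figures for illustration only (not Severn Trent plc data).
nopat = 120.0              # net operating profit after tax (in millions)
invested_capital = 1000.0  # adjusted capital employed (in millions)
wacc = 0.09                # weighted average cost of capital

capital_charge = wacc * invested_capital  # the charge for using shareholders' capital
eva = nopat - capital_charge              # positive EVA: value created above the cost of capital
```

The paper's criticism centres on the adjustments needed to turn published accounting figures into `nopat` and `invested_capital`, which this toy calculation deliberately glosses over.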
Abstract:
A method has been constructed for the solution of a wide range of chemical plant simulation models involving differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into a nonlinear programming (NLP) problem, which is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms, and the state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2; problems containing 3 independent variables can be transformed into problems having 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can be approximated either by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over the finite elements. The number of internal collocation points can vary between finite elements. The residual error is evaluated at arbitrarily chosen equidistant grid points, enabling the user to check the accuracy of the solution between the collocation points, at which the equations are satisfied exactly. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial value ordinary differential equations; this approach is recommended when there are many differential equations or when the upper integration limit is itself to be selected optimally.
The portability of the package has been addressed by converting it from VAX FORTRAN 77 to IBM PC FORTRAN 77 and to SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce the solutions of optimization problems published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization, and 2 nonlinear problem solver modules, GRG2 and VF13AD. There is also a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
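The state approximation described above, a linear combination of Lagrange interpolation polynomials that matches the data exactly at the collocation nodes and can be checked between them, can be sketched as follows. This is a generic illustration of Lagrange interpolation, not code from the package.

```python
import math

def lagrange_basis(nodes, j, t):
    """Value at t of the j-th Lagrange basis polynomial over the nodes."""
    v = 1.0
    for k, tk in enumerate(nodes):
        if k != j:
            v *= (t - tk) / (nodes[j] - tk)
    return v

def interpolate(nodes, values, t):
    """State approximation: a linear combination of Lagrange polynomials."""
    return sum(v * lagrange_basis(nodes, j, t) for j, v in enumerate(values))

# Approximate y(t) = exp(-t) from three nodes; the approximation reproduces
# the data exactly at the nodes, and its accuracy can be checked between them.
nodes = [0.0, 0.5, 1.0]
values = [math.exp(-t) for t in nodes]
mid_error = abs(interpolate(nodes, values, 0.25) - math.exp(-0.25))
```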
Abstract:
In this work, the solution of a class of capital investment problems is considered within the framework of mathematical programming. On the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, and this approach leads to zero-one non-linear programming problems, for which specifically designed solution procedures, supported by a general branch and bound development, are presented. In the context of both this development and the relevant mathematical properties of the previously mentioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the sequencing of the search for optimal solutions, is presented as an alternative in which storage requirements are reduced.
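The underlying zero-one selection problem, choose the subset of projects maximizing total net present value subject to a capital rationing constraint, can be illustrated by brute-force enumeration. This is a toy sketch with hypothetical figures, not the branch and bound procedure developed in the work, which prunes this search instead of enumerating it.

```python
from itertools import combinations

# Hypothetical projects: (net present value, capital requirement).
projects = [(120.0, 300.0), (90.0, 200.0), (65.0, 150.0), (40.0, 100.0)]
budget = 450.0  # capital rationing constraint

best_npv, best_pick = 0.0, ()
for r in range(len(projects) + 1):
    for pick in combinations(range(len(projects)), r):
        cost = sum(projects[i][1] for i in pick)  # capital used by this subset
        npv = sum(projects[i][0] for i in pick)   # total NPV of this subset
        if cost <= budget and npv > best_npv:     # feasible and better: keep it
            best_npv, best_pick = npv, pick
```

Enumeration is exponential in the number of projects; the branch and bound development mentioned above exists precisely to avoid visiting most of these subsets.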
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular, we consider the solution of steady state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
Abstract:
This thesis describes a project which has investigated the evaluation of information systems. The work took place in, and is related to, a specific organisational context, that of the National Health Service (NHS). It aims to increase understanding of the evaluation which takes place in the service and the way in which this is affected by the NHS environment. It also investigates the issues which surround some important types of evaluation and their use in this context. The first stage of the project was a postal survey in which respondents were asked to describe the evaluation which took place in their authorities and to give their opinions about it. This was used to give an overview of the practice of IS evaluation in the NHS and to identify its uses and the problems experienced. Three important types of evaluation were then examined in more detail by means of action research studies. One of these dealt with the selection and purchase of a large hospital information system. The study took the form of an evaluation of the procurement process, and examined the methods used and the influence of organisational factors. The other studies are concerned with post-implementation evaluation, and examine the choice of an evaluation approach as well as its application. One was an evaluation of a community health system which had been operational for some time but was of doubtful value, and suffered from a number of problems. The situation was explored by means of a study of the costs and benefits of the system. The remaining study was the initial review of a system which was used in the administration of a Breast Screening Service. The service itself was also newly operational and the relationship between the service and the system was of interest.
Abstract:
This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for this model was demonstrated by the growth of local economic initiatives during the 1970s and the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of research confirmed that the potential for micro-economic and spatial initiatives existed, by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand, which led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap, providing a low-budget model, capable of implementation at the small-area level, that generates a large array of industrially disaggregated data, in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS) in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which terminates the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solution increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
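MFS collocation matrices are typically severely ill-conditioned, which is why the abstract pairs the MFS with Tikhonov regularization. The sketch below shows the Tikhonov step in isolation on a toy 2x2 system, x = (A^T A + lam I)^{-1} A^T b, with hypothetical numbers; it is a stand-in for the regularized MFS solve, not the authors' solver.

```python
def tikhonov_2x2(A, b, lam):
    """Tikhonov-regularised least squares x = (A^T A + lam I)^{-1} A^T b
    for a 2x2 system, written out explicitly."""
    # Form the symmetric normal matrix A^T A + lam I and the right side A^T b.
    m00 = A[0][0] * A[0][0] + A[1][0] * A[1][0] + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] * A[0][1] + A[1][1] * A[1][1] + lam
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    # Invert the 2x2 normal matrix by Cramer's rule.
    det = m00 * m11 - m01 * m01
    return ((m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det)

A = [[1.0, 1.0], [1.0, 1.0001]]  # nearly rank-deficient, as MFS matrices often are
b = [2.0, 2.0001]                # data consistent with the solution x = (1, 1)
x = tikhonov_2x2(A, b, lam=1e-6)
```

The regularization parameter `lam` plays the role chosen by GCV in the paper: too small and noise in `b` is amplified by the near-singularity, too large and the solution is over-smoothed.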
Abstract:
We propose and investigate a method for the stable determination of a harmonic function from knowledge of its value and its normal derivative on a part of the boundary of the (bounded) solution domain (Cauchy problem). We reformulate the Cauchy problem as an operator equation on the boundary using the Dirichlet-to-Neumann map. To discretize the obtained operator, we modify and employ a method denoted as Classic II given in [J. Helsing, Faster convergence and higher accuracy for the Dirichlet–Neumann map, J. Comput. Phys. 228 (2009), pp. 2578–2586, Section 3], which is based on Fredholm integral equations and Nyström discretization schemes. Then, for stability reasons, to solve the discretized integral equation we use the method of smoothing projection introduced in [J. Helsing and B.T. Johansson, Fast reconstruction of harmonic functions from Cauchy data using integral equation techniques, Inverse Probl. Sci. Eng. 18 (2010), pp. 381–399, Section 7], which makes it possible to solve the discretized operator equation in a stable way with minor computational cost and high accuracy. With this approach, for sufficiently smooth Cauchy data, the normal derivative can also be accurately computed on the part of the boundary where no data is initially given.
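The Nyström idea underlying the discretization above, replace the integral in a second-kind Fredholm equation by a quadrature rule and solve the resulting linear system at the nodes, can be shown on a toy example. This generic sketch uses a trapezoid rule and a constant kernel chosen so the exact solution is known; it is not Helsing's Classic II scheme.

```python
def solve(Amat, rhs):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(Amat)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def nystrom(kernel, f, n):
    """Nystrom discretization of x(t) - int_0^1 k(t,s) x(s) ds = f(t)
    on n trapezoid nodes: quadrature turns the equation into a linear system."""
    nodes = [i / (n - 1) for i in range(n)]
    w = [1.0 / (n - 1)] * n
    w[0] = w[-1] = 0.5 / (n - 1)  # trapezoid end weights
    A = [[(1.0 if i == j else 0.0) - w[j] * kernel(ti, nodes[j])
          for j in range(n)] for i, ti in enumerate(nodes)]
    return nodes, solve(A, [f(t) for t in nodes])

# Constant kernel and data chosen so that the exact solution is x(t) = 1.
nodes, x = nystrom(lambda t, s: 0.5, lambda t: 0.5, 9)
```

In the paper the same pattern applies with the boundary integral kernels of the Dirichlet-to-Neumann map in place of this toy kernel, followed by the smoothing projection for stability.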
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional parabolic inverse Cauchy–Stefan problem, where boundary data and the initial condition are to be determined from the Cauchy data prescribed on a given moving interface. In [B.T. Johansson, D. Lesnic, and T. Reeve, A method of fundamental solutions for the one-dimensional inverse Stefan problem, Appl. Math. Model. 35 (2011), pp. 4367–4378], the inverse Stefan problem was considered, where only the boundary data are to be reconstructed on the fixed boundary. We extend the MFS proposed in Johansson et al. (2011) and show that the initial condition can also be simultaneously recovered, i.e. the MFS is appropriate for the inverse Cauchy–Stefan problem. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be obtained efficiently and with small computational cost.
Abstract:
In this paper, free surface problems of Stefan-type for the parabolic heat equation are investigated using the method of fundamental solutions. The additional measurement necessary to determine the free surface could be a boundary temperature, a heat flux or an energy measurement. Both one- and two-phase flows are investigated. Numerical results are presented and discussed.
Abstract:
We consider a Cauchy problem for the heat equation, in which the temperature field is to be reconstructed from the temperature and heat flux given on a part of the boundary of the solution domain. We employ a Landweber-type method proposed in [2], in which a sequence of mixed well-posed problems is solved, one at each iteration step, to obtain a stable approximation to the original Cauchy problem. We develop an efficient boundary integral equation method for the numerical solution of these mixed problems, based on the method of Rothe. Numerical examples are presented with both exact and noisy data, showing the efficiency and stability of the proposed procedure and approximations.
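The classical Landweber iteration behind such Landweber-type schemes is x_{k+1} = x_k + omega A^T (b - A x_k), with the iteration count acting as the regularization parameter (early stopping). The toy 2x2 example below shows the bare iteration; in the paper each application of A and A^T corresponds to solving a mixed well-posed problem, not a matrix product.

```python
def landweber(A, b, omega, iters):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k) for a
    2x2 system; converges for 0 < omega < 2 / sigma_max(A)^2."""
    x = [0.0, 0.0]
    for _ in range(iters):
        # Residual b - A x, then a step along A^T applied to the residual.
        res = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        x = [x[j] + omega * sum(A[i][j] * res[i] for i in range(2)) for j in range(2)]
    return x

# Diagonal toy system with exact solution (1, 1); sigma_max^2 = 4, so omega = 0.4 is admissible.
x = landweber([[2.0, 0.0], [0.0, 1.0]], [2.0, 1.0], omega=0.4, iters=100)
```

With noisy `b`, one stops the iteration early, before the amplified noise dominates, which is the regularizing mechanism exploited by the method in [2].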