Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains concerns detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on the use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules that give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansions for time-domain problems in non-separable physical domains with mixed boundary conditions.
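The Fourier Continuation idea mentioned above can be illustrated with a small numerical sketch: a non-periodic function sampled on [0, 1] is fit, in the least-squares sense, by a Fourier series that is periodic on a larger interval [0, b], so the non-periodic data acquires a rapidly convergent trigonometric representation. The extension length, number of modes, and test function below are illustrative assumptions, not choices taken from the thesis:

```python
import numpy as np

def fourier_continuation(x, f, b=2.0, n_modes=20):
    """Least-squares fit of samples f(x), x in [0, 1], by a Fourier
    series periodic on [0, b]. Returns mode indices and coefficients.
    (Illustrative sketch only; parameters are arbitrary.)"""
    k = np.arange(-n_modes, n_modes + 1)          # Fourier mode indices
    A = np.exp(2j * np.pi * np.outer(x, k) / b)   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    return k, coeffs

def evaluate(k, coeffs, x, b=2.0):
    """Evaluate the fitted Fourier series at points x."""
    return np.exp(2j * np.pi * np.outer(x, k) / b) @ coeffs

# Fit a non-periodic function on [0, 1] and check the match there.
x = np.linspace(0.0, 1.0, 200)
f = np.exp(x)                                     # non-periodic on [0, 1]
k, c = fourier_continuation(x, f)
err = np.max(np.abs(evaluate(k, c, x).real - f))
print(f"max fit error on [0, 1]: {err:.2e}")
```

Because the extended function is periodic and smooth, its Fourier coefficients decay rapidly, which is the regularization effect exploited for the Zaremba singularities.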
Abstract:
The intent of this study is to provide formal apparatus which facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.
A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.
To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.
An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.
The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.
A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.
The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.
Abstract:
Spurious reflection is one of the more troublesome problems in phase-shifting interferometry. This paper deals with the problem on the basis of a two-run-times-two-frame phase-shift algorithm, in which the phase shifts are shared between the reference beam and the object beam. The effect of spurious reflection on phase measurement is investigated, and two simple methods for removing the effect are presented, each of which needs only six interferograms. Two other solutions to the spurious reflection problem are also reviewed. The simulation results obtained using these four solutions are compared. The influence of a combination of phase-shifter miscalibration and spurious reflection on phase measurement is also discussed.
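For context, the classical four-step phase-shifting algorithm on which such multi-frame methods build can be sketched as follows. This is not the paper's two-run-times-two-frame algorithm, and the fringe parameters are illustrative; it only shows how a phase map is recovered from frames shifted by quarter periods:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Recover phase from four interferograms with shifts 0, pi/2,
    pi, 3pi/2: I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi)."""
    return np.arctan2(I4 - I2, I1 - I3)

# Simulate interferograms for a known phase map and recover it.
phi = np.linspace(-1.0, 1.0, 100)          # true phase (radians)
A, B = 1.0, 0.5                            # background and modulation
frames = [A + B * np.cos(phi + n * np.pi / 2) for n in range(4)]
phi_rec = four_step_phase(*frames)
print(np.max(np.abs(phi_rec - phi)))       # near machine precision
```

A spurious reflection adds an unwanted coherent term to each frame, which breaks the clean sine/cosine cancellation above; the paper's six-interferogram schemes are designed to cancel that term.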
Abstract:
We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication networks. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
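A minimal fluid-model sketch of coupled multipath congestion control, in the spirit of the models described above, can show why coupling shifts traffic toward less-congested paths. The linked-increase update rule, loss probabilities, and time scales below are illustrative assumptions; they are not the Balia update rules:

```python
import numpy as np

def simulate(p, rtt, steps=20000, dt=0.01, a=1.0):
    """Deterministic fluid model of a linked-increase multipath
    controller: window increase a/total per ACK (coupled across paths),
    halving on loss (per path). All parameters are illustrative."""
    w = np.ones(len(p))                     # per-path congestion windows
    for _ in range(steps):
        x = w / rtt                         # per-path packet rates
        total = w.sum()
        # fluid-scale drift: linked increase minus loss-driven decrease
        dw = x * ((1 - p) * a / total - p * w / 2)
        w = np.maximum(w + dw * dt, 1e-3)
    return w

p = np.array([0.01, 0.05])                  # path 2 is more congested
rtt = np.array([1.0, 1.0])
w = simulate(p, rtt)
print(w)   # the less-congested path carries the larger window
```

At equilibrium the coupled increase balances the per-path loss rate, so the window on the low-loss path dominates; the thesis's fluid model makes the existence and stability of such equilibria precise for a large class of update rules.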
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs a loss in optimality of less than 3% on the test networks.
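To make the combinatorial nature of feeder reconfiguration concrete, the brute-force baseline that the heuristic above avoids can be sketched on a toy network: enumerate radial (spanning-tree) switch configurations and keep the one with the smallest approximate loss, taken here as the sum of r times squared branch flow. The network data and the simplified loss model are hypothetical, and real networks are far too large for this enumeration:

```python
import itertools

nodes = [0, 1, 2, 3]            # node 0 is the substation
load = {1: 1.0, 2: 2.0, 3: 1.0} # per-node demand (p.u.)
# candidate lines, i.e. switchable branches: (u, v, resistance)
lines = [(0, 1, 0.1), (0, 2, 0.3), (1, 2, 0.1), (1, 3, 0.2), (2, 3, 0.1)]

def tree_loss(tree):
    """Approximate loss of a radial configuration: each branch carries
    the total load downstream of it; loss = sum(r * flow^2).
    Returns inf if the configuration does not span all nodes."""
    adj = {n: [] for n in nodes}
    for u, v, r in tree:
        adj[u].append((v, r))
        adj[v].append((u, r))
    loss, seen = 0.0, {0}
    def downstream(n):                      # DFS from the substation
        nonlocal loss
        total = load.get(n, 0.0)
        for m, r in adj[n]:
            if m not in seen:
                seen.add(m)
                f = downstream(m)           # flow on branch (n, m)
                loss += r * f * f
                total += f
        return total
    downstream(0)
    return loss if len(seen) == len(nodes) else float("inf")

# Radial configurations have exactly (number of nodes - 1) closed lines.
best = min(itertools.combinations(lines, len(nodes) - 1), key=tree_loss)
print(sorted((u, v) for u, v, _ in best), tree_loss(best))
```

The number of such configurations grows exponentially with network size, which is why the thesis replaces enumeration with a convex-relaxation-based heuristic.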
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's law couples the network globally. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition yields closed-form solutions for these subproblems and speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and computation time is reduced by 100x compared with iterative methods.
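The structural point above, that closed-form ADMM subproblems avoid inner iterative solves, can be illustrated on a toy consensus problem. This is not the OPF decomposition itself; the quadratic objective, penalty parameter, and data below are arbitrary, chosen only so that every ADMM step reduces to a few arithmetic operations:

```python
import numpy as np

# Consensus ADMM for min_z sum_i (z - a_i)^2 / 2, where each a_i is
# data held by one "agent". Because each local objective is quadratic,
# the x-update (a proximal step) has a closed-form solution, so each
# iteration costs O(n) arithmetic with no inner solver.
a = np.array([1.0, 3.0, 8.0])      # local data held by 3 agents
rho = 1.0                          # ADMM penalty parameter (arbitrary)
x = np.zeros(3)                    # local copies of the variable
z = 0.0                            # consensus variable
u = np.zeros(3)                    # scaled dual variables
for _ in range(100):
    # x-update: argmin_x (x - a_i)^2/2 + (rho/2)(x - z + u_i)^2
    x = (a + rho * (z - u)) / (1 + rho)
    # z-update: averaging step (also closed form)
    z = np.mean(x + u)
    # dual update
    u = u + x - z
print(z)   # converges to mean(a) = 4.0
```

In the thesis's setting the same principle applies to much richer per-bus and per-line subproblems, which reduce to closed forms or small fixed-size eigenvalue problems rather than to a generic inner optimization.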
Abstract:
The first bilateral study of methods of biological sampling and biological methods of water quality assessment took place during June 1977 at selected sampling sites in the catchment of the River Trent (UK). The study was arranged in accordance with the protocol established by the joint working group responsible for the Anglo-Soviet Environmental Agreement. The main purpose of the bilateral study in Nottingham was to demonstrate to the Soviet biologists some of the methods of sampling and biological assessment used by their UK counterparts, and to give them the opportunity to test these methods at first hand in order to judge the potential of any of these methods for use within the Soviet Union. This paper is concerned with the nine river stations in the Trent catchment.
Abstract:
In recent collaborative biological sampling exercises organised by the Nottingham Regional Laboratory of the Severn-Trent Water Authority, the effect of hand-net sampling variation on the quality and usefulness of the data obtained has been questioned, especially when these data are transcribed into one or more of the commonly used biological methods of water quality assessment. This study investigates whether this effect is constant at sites with similar topography but differing water quality when the sampling method is standardized and carried out by a single operator. An argument is made for the use of a lowest-common-denominator approach to give a more consistent result and obviate the effect of sampling variation on these biological assessment methods.
Abstract:
Guided by experience and the theoretical development of hydrobiology, the main aim of water quality control can be considered to be the establishment of rates of the self-purification process of water bodies that are capable of maintaining communities in a state of dynamic balance without changing the integrity of the ecosystem. Hence, general approaches in the elaboration of methods for hydrobiological control are based on the following principles: (a) the balance of matter and energy in water bodies; (b) the integrity of the ecosystem structure and of its separate components at all levels. Ecosystem analysis makes it possible to reveal the totality of factors that determine the anthropogenic evolution of a water body; this is necessary for the study of long-term changes in water bodies. The principles of ecosystem analysis of water bodies, together with the creation of their mathematical models, are important because, in the future, with the transition of water-demanding production into closed cycles of water supply, changes in water bodies will arise mainly through the influence of 'diffuse' pollution (from the atmosphere, through utilisation in transport, etc.).
Abstract:
In the study of questions relating to the quality of raw water and the biological productivity of water bodies, algal indicators have an important place. Despite the importance of these functional indicators in determining the quality of water and the nature of the production processes as a basis for preserving the ecological equilibrium of aquatic ecosystems, their use in the system of hydrobiological methods of monitoring the quality of surface water has not received proper consideration. This paper aims to analyse the matter and the possible use of functional algal criteria in the system for the biological monitoring of aquatic objects, and also to give some results from using these criteria.