929 results for Optimal control problems


Relevance: 30.00%

Abstract:

EuroPES 2009

Relevance: 30.00%

Abstract:

Diseases and parasitic problems can cause significant economic losses in fish production if not controlled, hence the need for continued monitoring of their prevalence. Field studies on feral and intensively raised fish at the Kainji Lake Research Institute, Nigeria, identified a number of disease and parasite problems. These include helminthiasis; fungal disease; protozoans, including Myxosoma sp., Myxobolus spp., Henneguya sp., Trichodina sp. and Ichthyophthirius sp.; bacteria, mainly Aeromonas sp. and Pseudomonas sp.; mechanical injuries; and deaths from unknown causes. An economic assessment of myxosporidian infection is also presented, and suggestions for disease control in fish production are made.

Relevance: 30.00%

Abstract:

Community-Based Resource Management (CBRM) is an approach that emphasizes a community's capability, responsibility and accountability in managing resources. Based on the recommendations of the Nigerian-German Kainji Lake Fisheries Promotion Project (KLFPP), the Niger and Kebbi States Fisheries Edicts were promulgated in 1997. These edicts, among other things, banned the use of beach seines. KLFPP's conviction was that if communities whose livelihoods are linked to the fishery understand and identify the problems, and by consensus agree on solutions, they are more likely to adhere to control measures, specifically the ban on beach seining. In 1999 a first agreement was reached between beach seiners, non-beach seiners and government authorities, leading to an almost complete elimination of beach seining on Lake Kainji. However, despite ongoing efforts of the Kainji Lake Fisheries Management and Conservation Unit in 2000, and possibly because of certain oversights during and after the first agreement, a significant number of beach seiners was observed in May 2001. This led to a re-assessment of the approach, which recently culminated in another round of negotiation. The paper presents the latest results of this ongoing process.

Relevance: 30.00%

Abstract:

Fish cage culture is a rapidly expanding aquacultural practice that produces higher yields than traditional pond culture. Species cultured by this method include Cyprinus carpio, Oreochromis niloticus, Sarotherodon galilaeus, Tilapia zillii, Clarias lazera, C. gariepinus, Heterobranchus bidorsalis, Citharinus citharus, Distichodus rostratus and Alestes dentex. However, the culture of fish in cages has problems arising either from mechanical defects of the cage or from disease. The mechanical problems, which may lead to clogged nets, toxicity and easy access by predators, depend on defects associated with the various types of netting in use: folded sieve-cloth, wire, polypropylene, nylon, and galvanized and welded netting. The disease problems are of two types. The first is introduced diseases due to parasites, including crustaceans (Ergasilus sp., Argulus africana and Lamproglena sp.), helminths (Diplostomulum tregenna) and protozoans (Trichodina sp., Myxosoma sp. and Myxobolus sp.). The second is inherent diseases aggravated by the nutrient-rich cage environment, which promotes rapid bacterial, saprophytic fungal and phytoplankton blooms, resulting in clogged nets, stagnant water and high biological oxygen demand (BOD). The consequences are fish kills and the prevalence of gill rot and dropsy. Recommendations on routine cage hygiene, diagnosis and control procedures to reduce fish mortality are highlighted.

Relevance: 30.00%

Abstract:

Currently completing its fifth year, the Coastal Waccamaw Stormwater Education Consortium (CWSEC) helps northeastern South Carolina communities meet National Pollutant Discharge Elimination System (NPDES) Phase II permit requirements for Minimum Control Measure 1 - Public Education and Outreach - and Minimum Control Measure 2 - Public Involvement. Coordinated by Coastal Carolina University, six regional organizations serve as core education providers to eight coastal localities, including six towns and cities and two large counties. CWSEC recently finished a needs assessment to begin strategizing for the second NPDES Phase II 5-year permit cycle, in order to continue to develop and implement effective, results-oriented stormwater education and outreach programs that meet federal requirements and satisfy local environmental and economic needs. From its inception in May 2004, CWSEC set out to fulfill new federal Clean Water Act requirements associated with the NPDES Phase II Stormwater Program. Six small municipal separate storm sewer systems (MS4s) located within the Myrtle Beach Urbanized Area endorsed a coordinated approach to regional stormwater education, and participated in a needs assessment resulting in a Regional Stormwater Education Strategy and a Phased Education Work Plan. In 2005, CWSEC was formally established and the CWSEC’s Coordinator was hired. The Coordinator, who is also the Environmental Educator at Coastal Carolina University’s Waccamaw Watershed Academy, coordinates the six regional agencies that serve as core education providers for eight coastal communities.
The six regional agencies working as core education providers to the member MS4s include Clemson Public Service and Carolina Clear Program, Coastal Carolina University’s Waccamaw Watershed Academy, Murrells Inlet 2020, North Inlet-Winyah Bay National Estuarine Research Reserve’s Coastal Training and Public Education Programs, South Carolina Sea Grant Consortium, and Winyah Rivers Foundation’s Waccamaw Riverkeeper®. CWSEC’s organizational structure results in a synergy among the education providers, achieving greater productivity than if each provider worked separately. The member small MS4s include City of Conway, City of North Myrtle Beach, City of Myrtle Beach, Georgetown County, Horry County, Town of Atlantic Beach, Town of Briarcliffe Acres, and Town of Surfside Beach. Each MS4 contributes a modest annual fee toward the salary of the Coordinator and operational costs. (PDF contains 3 pages)

Relevance: 30.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
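
The graphical-lasso step described above can be sketched with scikit-learn's `GraphicalLasso`. This is only a minimal illustration, not the modified algorithm proposed in the dissertation: the 4-node chain precision matrix, the sample size, and the penalty `alpha=0.05` are all invented for the example.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix of a 4-node "chain" circuit:
# only neighbouring nodes are conditionally dependent.
P = np.array([[ 2.0, -0.8,  0.0,  0.0],
              [-0.8,  2.0, -0.8,  0.0],
              [ 0.0, -0.8,  2.0, -0.8],
              [ 0.0,  0.0, -0.8,  2.0]])
cov = np.linalg.inv(P)

# Simulated nodal measurements (e.g. voltages observed at the terminals).
X = rng.multivariate_normal(np.zeros(4), cov, size=5000)

# Sparse inverse-covariance estimate via the graphical lasso.
model = GraphicalLasso(alpha=0.05).fit(X)
P_hat = model.precision_

# The circuit topology is read off from the non-zero off-diagonal entries.
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(P_hat[i, j]) > 0.1]
print(edges)
```

With a well-conditioned covariance (as here), the recovered edge set matches the chain; the ill-conditioned case is where the thesis's modification becomes necessary.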

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
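
The rate/price interplay described here can be illustrated with the textbook dual (gradient) congestion controller on a single link. This sketch assumes a log utility and ignores exactly the buffering effects the thesis models, so it shows the classical fluid picture, not the corrected one; all numbers are illustrative.

```python
# Dual congestion control on one link: the source picks the rate that
# maximizes w*log(x) - p*x (i.e. x = w/p), and the link raises its price
# when demand exceeds capacity. Equilibrium: x -> c, p -> w/c.
w = 1.0        # source's utility weight (willingness to pay)
c = 1.0        # link capacity
p = 0.5        # initial link price
gamma = 0.05   # price update step size

for _ in range(500):
    x = w / p                            # source's optimal rate at this price
    p = max(1e-6, p + gamma * (x - c))   # price update: gradient ascent on the dual

print(round(x, 4), round(p, 4))
```

For small step sizes the iteration contracts toward the fair equilibrium; the thesis's point is that once queueing is modeled, stability of schemes like this is no longer automatic.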

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
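
As a much simplified illustration of the OPF problem (using the linear DC approximation rather than the AC relaxation studied in the thesis), a two-bus dispatch can be posed as a linear program. All system data below are invented for the example.

```python
from scipy.optimize import linprog

# DC (linearized) optimal power flow on a 2-bus toy system:
# cheap generator g1 at bus 1, expensive g2 at bus 2, a 1.5 p.u. load
# at bus 2, and a 1.0 p.u. thermal limit on the line from bus 1 to bus 2.
cost = [1.0, 3.0]                  # generation costs for g1, g2
A_eq, b_eq = [[1.0, 1.0]], [1.5]   # power balance: g1 + g2 = load
A_ub, b_ub = [[1.0, 0.0]], [1.0]   # line limit: flow (= g1 here) <= 1.0
bounds = [(0.0, 2.0), (0.0, 2.0)]  # generator capacity limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2 = res.x
print(round(g1, 3), round(g2, 3), round(res.fun, 3))
```

The cheap unit runs up to the line limit and the expensive unit covers the remainder. The AC problem replaces these linear constraints with nonlinear power-flow equations, which is what makes the convex relaxation (and the power over-delivery idea) nontrivial.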

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 30.00%

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must obey these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins; when ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down its response, and constrains its performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria.
However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfers, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
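
The preferential-linking model mentioned above can be sketched directly; this minimal, self-contained version (graph size, attachment parameter, and seed all chosen arbitrarily) shows the heavy-tailed degree distribution such growth produces, in contrast to the exponential case.

```python
import random

random.seed(1)

def preferential_attachment(n, m=2):
    """Grow a graph where each new node attaches m edges, choosing
    targets with probability proportional to their current degree."""
    # Start from a small clique of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Node k appears deg(k) times in `pool`, so uniform sampling
    # from it is degree-proportional sampling.
    pool = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(pool))
        for t in targets:
            edges.append((new, t))
            pool.extend([new, t])
    return edges

edges = preferential_attachment(500)
deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1

mean_deg = sum(deg.values()) / len(deg)
print(max(deg.values()), round(mean_deg, 2))  # a few hubs far above the mean
```

The hubs many times larger than the mean degree are the signature of the power-law tail; a duplication or horizontal-transfer mechanism, as discussed in the text, changes this picture.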

Relevance: 30.00%

Abstract:

An acousto-optic programmable dispersive filter (AOPDF) was employed, for the first time, to actively control the frequency chirp of linearly polarized femtosecond pump pulses for supercontinuum (SC) generation in a high-birefringence photonic crystal fiber (PCF). By accurately controlling the second-order phase distortion and polarization direction of the incident pulses, the output SC spectrum can be tuned to various spectral energy distributions and bandwidths. The pump pulse energy and bandwidth are preserved in our experiment. It is found that SC with broader bandwidth can be generated with positively chirped pump pulses, unless the chirp exceeds an optimal value, and the same optimal value exists for pump pulses polarized along either principal axis. With optimal positive chirp, more than 78% of the pump energy can be transferred to wavelengths below 750 nm. Negative chirp, by contrast, weakens the blue-shift broadening and narrows the SC bandwidth. (C) 2007 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, strategies for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
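
For concreteness, the classical Hoeffding bound that the thesis's relaxations improve upon can be evaluated directly and compared against simulation. This sketch uses invented parameters (100 variables on [0, 1], deviation t = 10) and is only meant to show why the bound leaves room for tightening.

```python
import math, random

random.seed(0)

n, t = 100, 10.0   # n bounded variables in [0, 1]; deviation t of the sum

# Hoeffding's inequality for independent X_i in [a_i, b_i]:
#   P(S - E[S] >= t) <= exp(-2 t^2 / sum_i (b_i - a_i)^2)
hoeffding = math.exp(-2 * t**2 / n)   # here every (b_i - a_i) = 1

# Monte Carlo estimate of the same tail probability for uniform [0, 1] variables.
trials = 20000
hits = sum(sum(random.random() for _ in range(n)) - n / 2 >= t
           for _ in range(trials))
empirical = hits / trials

print(round(hoeffding, 4), empirical)
```

The empirical tail probability falls far below the distribution-free bound, which is the gap that the symmetry-exploiting sums-of-squares relaxations in the thesis are shown to close.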

Relevance: 30.00%

Abstract:

We show that the peak intensity of single attosecond x-ray pulses is enhanced by 1 or 2 orders of magnitude, the pulse duration is greatly compressed, and the optimal propagation distance is shortened by genetic algorithm optimization of the chirp and initial phase of 5 fs laser pulses. However, as the laser intensity increases, more efficient nonadiabatic self-phase matching can lead to a dramatically enhanced harmonic yield, and the efficiency of optimization decreases in the enhancement and compression of the generated attosecond pulses. (c) 2006 Optical Society of America.

Relevance: 30.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance: 30.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
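
The preference-function idea can be made concrete with a small sketch. The criteria, limits, and the product combination rule below are invented for illustration; the thesis's actual preference combination rule may differ.

```python
# "Soft" design criteria via preference functions: each maps a performance
# parameter to a satisfaction degree in [0, 1], and the overall evaluation
# measure combines them (here: a product, so any poorly satisfied
# criterion drags the whole design down).

def ramp(x, good, bad):
    """Preference 1.0 at `good` or better, 0.0 at `bad` or worse, linear between."""
    if good < bad:   # smaller is better (e.g. drift, cost)
        t = (bad - x) / (bad - good)
    else:            # larger is better (e.g. reliability)
        t = (x - bad) / (good - bad)
    return max(0.0, min(1.0, t))

design = {"drift_ratio": 0.004, "cost_M$": 2.4, "reliability": 0.97}
prefs = {
    "drift_ratio": ramp(design["drift_ratio"], good=0.002, bad=0.008),
    "cost_M$":     ramp(design["cost_M$"],     good=2.0,   bad=4.0),
    "reliability": ramp(design["reliability"], good=0.99,  bad=0.90),
}
overall = 1.0
for v in prefs.values():
    overall *= v   # preference combination rule (product form)
print({k: round(v, 3) for k, v in prefs.items()}, round(overall, 4))
```

The optimization described in the text then searches the design space for the parameter choice maximizing this overall measure.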

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration capability needed to search high-dimensional spaces for optimal solutions. Two specialized genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
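
The basic selection-crossover-mutation loop underlying such methods can be sketched as follows. This is a generic one-dimensional real-coded GA, not the hGA or vGA variants developed here; the test function and all tuning constants are invented.

```python
import random

random.seed(42)

def ga_minimize(f, bounds, pop_size=30, gens=60, mut_rate=0.2):
    """Minimal real-coded genetic algorithm: truncation selection,
    arithmetic crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=f)[:pop_size // 2]   # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                        # arithmetic crossover
            if random.random() < mut_rate:             # Gaussian mutation
                child += random.gauss(0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))   # clip to the search space
        pop = children
    return min(pop, key=f)

best = ga_minimize(lambda x: (x - 1.7) ** 2, bounds=(-5.0, 5.0))
print(round(best, 3))
```

Mutation supplies the exploration and selection the exploitation; the hGA/vGA of this work adapt the same loop to continuous and discrete structural design variables.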

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 30.00%

Abstract:

This thesis presents a civil engineering approach to active control for civil structures. The proposed control technique, termed Active Interaction Control (AIC), utilizes dynamic interactions between different structures, or components of the same structure, to reduce the resonance response of the controlled or primary structure under earthquake excitations. The primary control objective of AIC is to minimize the maximum story drift of the primary structure. This is accomplished by timing the controlled interactions so as to withdraw the maximum possible vibrational energy from the primary structure to an auxiliary structure, where the energy is stored and eventually dissipated as the external excitation decreases. One of the important advantages of AIC over most conventional active control approaches is the very low external power required.

In this thesis, the AIC concept is introduced and a new AIC algorithm, termed the Optimal Connection Strategy (OCS) algorithm, is proposed. The efficiency of the OCS algorithm is demonstrated and compared with two previously existing AIC algorithms, the Active Interface Damping (AID) and Active Variable Stiffness (AVS) algorithms, through idealized examples and numerical simulations of Single- and Multi-Degree-of-Freedom systems under earthquake excitations. It is found that the OCS algorithm is capable of significantly reducing the story drift response of the primary structure. The effects of the mass, damping, and stiffness of the auxiliary structure on the system performance are investigated in parametric studies. Practical issues such as the sampling interval and time delay are also examined. A simple but effective predictive time delay compensation scheme is developed.
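
The idea behind predictive time-delay compensation can be sketched on a damped oscillator: instead of acting on the delayed measurement, the controller propagates it forward through the system model by the known delay. This is a generic illustration, not the thesis's scheme; the oscillator parameters, delay, and Euler integration are all assumptions, and with an exact model the prediction cancels the delay error entirely, whereas in practice model error limits the compensation.

```python
import math

wn, zeta = 2 * math.pi, 0.02      # natural frequency (rad/s), damping ratio
dt, delay_steps = 0.001, 20       # integration step; 20 ms measurement delay

def step(x, v):
    """One forward-Euler step of the damped oscillator model."""
    a = -2 * zeta * wn * v - wn**2 * x
    return x + dt * v, v + dt * a

def predict(x, v, steps):
    """Propagate a (delayed) state forward through the model."""
    for _ in range(steps):
        x, v = step(x, v)
    return x, v

# Simulate free vibration, keeping past states as "delayed measurements".
x, v = 1.0, 0.0
history = []
for _ in range(2000):
    history.append((x, v))
    x, v = step(x, v)

x_delayed, v_delayed = history[-delay_steps]            # what the sensor reports
x_pred, _ = predict(x_delayed, v_delayed, delay_steps)  # model-based prediction

err_delayed = abs(x_delayed - x)   # error if the delay is ignored
err_pred = abs(x_pred - x)         # error after predictive compensation
print(round(err_delayed, 5), round(err_pred, 5))
```

Acting on the raw delayed state mistimes the controlled interactions, which is exactly what degrades AIC-type switching strategies; the predictive step restores the correct timing.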

Relevance: 30.00%

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as outer products, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times, no guarantee of working on all inputs, and a requirement of full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only to valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look to three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
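
The competitive dynamics of a Winner-Take-All circuit can be sketched with a simple discrete-time iteration; this is an illustrative abstraction, not the continuous Hopfield electronic model used in the thesis, and the inhibition strength and inputs are invented.

```python
# Iterative Winner-Take-All: mutually inhibiting units. Each unit is
# suppressed in proportion to the total activity of the others, so the
# unit with the largest input drives its competitors to zero and survives.
def winner_take_all(inputs, alpha=0.2, iters=100):
    a = list(inputs)
    for _ in range(iters):
        total = sum(a)
        a = [max(0.0, ai - alpha * (total - ai)) for ai in a]
    return a

acts = winner_take_all([0.3, 0.5, 0.9])
winner = max(range(len(acts)), key=lambda i: acts[i])
print(winner, [round(v, 3) for v in acts])
```

Only the unit with the largest input remains active, which is the constraint-enforcing behavior (e.g. "select exactly one route") exploited in the switching applications below.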

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.