953 results for Discrete-continuous optimal control problems
Abstract:
The development of a new set of frost property measurement techniques to be used in the control of frost growth and defrosting processes in refrigeration systems was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam-element load sensor was used to obtain the weight of the deposited frost layer. The proposed measurement techniques were tested under natural and forced convection, and characteristic charts were obtained for a set of operating conditions. An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation is commonly not considered in these models; instead, initial values of layer thickness and porosity are typically assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. Drop-wise early condensation on a cold flat plate exposed, under natural convection, to warm (room-temperature) humid air was modeled. A nucleation rate was found, and the ratio of heat to mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, which is the standard value assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process. The combination of frost top-temperature and weight-variation signals can now be used to control defrost timing, and the developed early nucleation model can now be used to simulate the entire frost growth process on any surface material.
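The Lewis number mentioned above compares thermal diffusivity with mass diffusivity. A minimal sketch of the textbook estimate for humid air, assuming standard property values near room temperature (not the study's measurements):

```python
# Minimal sketch: Lewis number Le = alpha / D_v, the ratio of thermal
# diffusivity to water-vapor mass diffusivity in air. The property values
# below are typical textbook figures for air near room temperature and are
# assumptions, not data from the study.

alpha = 2.2e-5   # thermal diffusivity of air [m^2/s] (approximate)
D_v   = 2.5e-5   # diffusivity of water vapor in air [m^2/s] (approximate)

Le = alpha / D_v
print(f"Lewis number Le = {Le:.2f}")
# ~0.9 for bulk humid air; the study reports much smaller effective values
# during the early frosting period.
```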
Abstract:
This research studies a hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted FF: batch1,sj:Cmax. It is formulated as a mixed-integer linear program, and the commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution even for small instances; a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computation-time problem encountered with the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules the sub-problems one by one. The BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete-processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD heuristic, which is further compared against a set of common heuristics, including a randomized greedy heuristic and five dispatching rules. The results show that the BFD heuristic outperforms all of these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the Genetic Algorithm is developed to evaluate the significance of further improving the solution obtained from the BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average, within negligible time, when the problem size is under 50 jobs.
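The single-stage batching sub-problem described above can be illustrated with a generic sketch: first-fit-decreasing grouping of jobs of arbitrary sizes into batches, followed by longest-processing-time assignment of batches to parallel machines. This is a simplified stand-in in the spirit of the sub-problem heuristics, not the authors' BFD algorithm; all job data are hypothetical.

```python
# Generic sketch of single-stage parallel batch-machine scheduling:
# first-fit-decreasing (FFD) batching, then LPT assignment to machines.
import heapq

def batch_ffd(jobs, capacity):
    """jobs: list of (size, proc_time); a batch runs as long as its longest job."""
    batches = []  # each batch is [used_capacity, batch_proc_time]
    for size, p in sorted(jobs, reverse=True):
        for b in batches:
            if b[0] + size <= capacity:   # job fits in an open batch
                b[0] += size
                b[1] = max(b[1], p)
                break
        else:
            batches.append([size, p])     # open a new batch
    return batches

def makespan_lpt(batches, n_machines):
    """Assign batches to parallel machines, longest processing time first."""
    loads = [0.0] * n_machines
    heapq.heapify(loads)
    for _, p in sorted(batches, key=lambda b: -b[1]):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

jobs = [(4, 3.0), (2, 5.0), (3, 2.0), (5, 4.0), (1, 1.0), (2, 6.0)]
print("makespan ~", makespan_lpt(batch_ffd(jobs, capacity=6), n_machines=2))
```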
Abstract:
The purpose of this study was to correct some mistakes in the literature and to derive a necessary and sufficient condition for the mean residual life (MRL) to follow the roller-coaster pattern of the corresponding failure rate function. It was also desired to find the conditions under which the discrete failure rate function has an upside-down bathtub shape when the corresponding MRL function has a bathtub shape. The study showed that if the discrete MRL has a bathtub shape, then under some conditions the corresponding failure rate function has an upside-down bathtub shape. The study also corrected some mistakes in the proofs of Tang, Lu and Chew (1999) and established a necessary and sufficient condition for the MRL to follow the roller-coaster pattern of the corresponding failure rate function. Similarly, some mistakes in Gupta and Gupta (2000) are corrected, with the ensuing results expanded and proved thoroughly to establish the relationship between the crossing points of the failure rate and associated MRL functions. The new results derived in this study will be useful for modeling lifetime data that occur in environmental studies, medical research, electronics engineering, and many other areas of science and technology.
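The discrete failure rate and MRL functions in question are standard reliability quantities. A small sketch computing both from a lifetime probability mass function (generic definitions, not the paper's derivations):

```python
# Standard discrete reliability quantities for a lifetime pmf p(k), k = 0,1,2,...:
#   survival      S(k) = P(X >= k)
#   failure rate  r(k) = p(k) / S(k)
#   MRL           m(k) = E[X - k | X >= k]
import numpy as np

def reliability_functions(pmf):
    p = np.asarray(pmf, dtype=float)
    p = p / p.sum()
    S = np.cumsum(p[::-1])[::-1]          # S[k] = P(X >= k)
    r = p / S                             # discrete failure rate
    k = np.arange(len(p))
    m = np.array([np.sum((k[i:] - i) * p[i:]) / S[i] for i in k])  # MRL
    return r, m

pmf = [0.05, 0.10, 0.20, 0.25, 0.20, 0.12, 0.08]   # illustrative pmf
r, m = reliability_functions(pmf)
print("failure rate:", np.round(r, 3))
print("MRL:        ", np.round(m, 3))
```

Plotting r(k) against m(k) for such examples is how the bathtub and upside-down-bathtub shape relationships discussed above are typically visualized.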
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting and underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proved showing that the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. Optimizing the novel information-theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information-theoretic functions for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information-theoretic functions are superior at learning the target kinematics with little or no prior knowledge.
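A minimal sketch of the greedy, discrete-control case described above, using Gaussian process posterior variance as a simple stand-in for the dissertation's expected-KL information functions; the kernel, its hyperparameters and the candidate grid are all illustrative assumptions:

```python
# Greedy sensing sketch: at each step, pick the candidate measurement location
# with the largest current GP posterior variance (an information surrogate).
import numpy as np

def rbf(X, Y, ell=0.5, sf=1.0):
    d = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return sf**2 * np.exp(-0.5 * d / ell**2)

def greedy_plan(candidates, n_steps, noise=1e-2):
    chosen = []
    prior_var = rbf(candidates, candidates).diagonal().copy()
    for _ in range(n_steps):
        if chosen:
            Xs = candidates[chosen]
            K = rbf(Xs, Xs) + noise * np.eye(len(chosen))
            k = rbf(candidates, Xs)
            # posterior variance = prior variance - explained variance
            var = prior_var - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)
        else:
            var = prior_var.copy()
        var[chosen] = -np.inf          # do not revisit chosen locations
        chosen.append(int(np.argmax(var)))
    return chosen

grid = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                            np.linspace(0, 1, 10)), -1).reshape(-1, 2)
print("visit order:", greedy_plan(grid, n_steps=5))
```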
Abstract:
The mixing regime of the upper 180 m of a mesoscale eddy in the vicinity of the Antarctic Polar Front at 47° S and 21° E was investigated during R.V. Polarstern cruise ANT-XVIII/2 within the scope of the iron fertilization experiment EisenEx. On the basis of hydrographic CTD and ADCP profiles we deduced the vertical diffusivity Kz from two different parameterizations. Since these parameterizations have the character of empirical functions based on theoretical and idealized assumptions, they were compared, inter alia, with Cox-number and Thorpe-scale related diffusivities deduced from microstructure measurements, which supplied the first direct insights into the turbulence of this ocean region. Values of Kz in the range of 10^-4 to 10^-3 m^2/s appear to be a rather robust estimate of vertical diffusivity within the seasonal pycnocline. Values in the mixed layer above are more variable in time and reach 10^-1 m^2/s during periods of strong winds. The results confirm a close agreement between the microstructure-based eddy diffusivities and eddy diffusivities calculated with the parameterization of Pacanowski and Philander [1981, Journal of Physical Oceanography 11, 1443-1451, doi:10.1175/1520-0485(1981)011<1443:POVMIN>2.0.CO;2].
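The Pacanowski and Philander [1981] scheme referenced here makes the eddy coefficients functions of the gradient Richardson number. A sketch using the commonly quoted constants; the parameter values are assumptions taken from typical implementations, not the cruise's fitted values:

```python
# Sketch of the Pacanowski & Philander (1981) Richardson-number-dependent
# mixing scheme. Constants (alpha = 5, n = 2) and background values are the
# commonly cited ones; treat them as illustrative assumptions.
def pp81_diffusivity(Ri, nu0=1e-2, alpha=5.0, n=2, nu_b=1e-4, kappa_b=1e-5):
    """Return (eddy viscosity, eddy diffusivity) in m^2/s for Richardson number Ri."""
    Ri = max(Ri, 0.0)                              # stable stratification
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_b      # momentum
    kappa = nu / (1.0 + alpha * Ri) + kappa_b      # tracers (Kz)
    return nu, kappa

for Ri in (0.0, 0.25, 1.0, 5.0):
    nu, kz = pp81_diffusivity(Ri)
    print(f"Ri = {Ri:4.2f}  ->  Kz ~ {kz:.2e} m^2/s")
```

With these constants, Kz falls into the 10^-4 to 10^-3 m^2/s range for order-one Richardson numbers, consistent with the pycnocline values quoted above.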
Abstract:
We consider time-dependent convection-diffusion-reaction equations on time-dependent domains, where the motion of the domain boundary is known. The temporal evolution of the domain is handled via the arbitrary Lagrangian-Eulerian (ALE) formulation, which remedies the drawbacks of the classical Eulerian and Lagrangian points of view. The position of the boundary and its velocity are extended into the interior of the domain in such a way that severe mesh deformations are prevented. As higher-order time discretizations, continuous Galerkin-Petrov (cGP) methods and discontinuous Galerkin (dG) methods are applied to problems on time-dependent domains. Furthermore, the C^1-continuous Galerkin-Petrov method and the C^0-continuous Galerkin method are presented. Their solutions can be obtained, also on time-dependent domains, from the solution of the cGP problem or the dG problem, respectively, by a simple uniform post-processing. For problems on fixed domains with time-constant convection and reaction terms, stability results and optimal error estimates for the post-processed solutions of the cGP and dG methods are given. For time-dependent convection-diffusion-reaction equations on time-dependent domains, we present conservative and non-conservative formulations, paying particular attention to the treatment of the time derivative and the mesh velocity. Stability and optimal error estimates for the conservative and non-conservative formulations semi-discretized in time are presented. Finally, the fully discretized problem is considered, in which a finite element method is employed for the spatial discretization of the convection-diffusion-reaction equations on time-dependent domains within the ALE framework. Moreover, a local projection stabilization (LPS) is employed to account for the dominance of convection. Furthermore, we investigate numerically how the approximation of the domain velocity affects the accuracy of the time discretization methods.
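For orientation, a standard non-conservative ALE formulation of this equation class reads as follows; the notation is chosen here for illustration and is not taken from the thesis:

```latex
% Non-conservative ALE form of a convection-diffusion-reaction equation on a
% moving domain Omega(t); u is the unknown, b the convection field, w the mesh
% velocity, eps the diffusion coefficient, c the reaction coefficient.
\[
  \left.\partial_t u\right|_{\hat{x}}
  + \bigl((b - w)\cdot\nabla\bigr) u
  - \varepsilon \Delta u + c\,u = f
  \quad \text{in } \Omega(t),
  \qquad u = 0 \ \text{on } \partial\Omega(t),
\]
% where the time derivative is taken along the trajectories of the ALE mapping
% from the reference domain to Omega(t).
```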
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
The aim of this paper is to provide an efficient control design technique for discrete-time positive periodic systems. In particular, stability, positivity and periodic invariance of such systems are studied. Moreover, the concept of periodic invariance with respect to a collection of boxes is introduced and investigated in connection with stability. It is shown how this concept can be used to derive a stabilizing state-feedback control that maintains the positivity of the closed-loop system and respects state and control signal constraints. In addition, all the proposed design conditions can be solved efficiently in terms of linear programming.
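As a small illustration of the linear-programming connection, the classical stability test for a (time-invariant) discrete-time positive system is an LP feasibility problem; this generic sketch is not the paper's periodic-invariance construction:

```python
# A positive system x(k+1) = A x(k) with A >= 0 elementwise is asymptotically
# stable iff there exists v > 0 with A^T v < v (a linear copositive Lyapunov
# function). That feasibility test is a linear program.
import numpy as np
from scipy.optimize import linprog

def positive_system_stable(A, eps=1e-6):
    n = A.shape[0]
    # Find v with (A^T - I) v <= -eps and v >= 1 (hence v > 0); minimize 0.
    res = linprog(c=np.zeros(n),
                  A_ub=A.T - np.eye(n),
                  b_ub=-eps * np.ones(n),
                  bounds=[(1.0, None)] * n,
                  method="highs")
    return res.success

A = np.array([[0.5, 0.3],
              [0.2, 0.4]])
print("stable:", positive_system_stable(A))   # True: spectral radius < 1
```

The periodic case studied in the paper leads to analogous LP conditions, one per period index.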
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, and other systems. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and for the analysis of network traffic.
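A minimal sketch of the n-gram idea described above, using per-user bigram counts with additive smoothing; the action names, smoothing constant and scores are illustrative assumptions, not Intruder Detector internals:

```python
# Per-user bigram model over logged actions: train on past sessions, then
# score a new session by its average log-likelihood. Low scores flag
# deviations from the user's "normal behavior".
import math
from collections import defaultdict

class BigramUserModel:
    def __init__(self, smoothing=0.5):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.vocab = set()
        self.smoothing = smoothing

    def train(self, sessions):
        for actions in sessions:
            for prev, cur in zip(actions, actions[1:]):
                self.counts[prev][cur] += 1
                self.vocab.update((prev, cur))

    def score(self, actions):
        """Average log-probability per transition (higher = more typical)."""
        total, n = 0.0, 0
        V = max(len(self.vocab), 1)
        for prev, cur in zip(actions, actions[1:]):
            row = self.counts[prev]
            p = (row[cur] + self.smoothing) / (sum(row.values()) + self.smoothing * V)
            total += math.log(p)
            n += 1
        return total / max(n, 1)

model = BigramUserModel()
model.train([["login", "open_report", "filter", "export", "logout"],
             ["login", "open_report", "export", "logout"]])
print("typical  :", model.score(["login", "open_report", "filter", "export"]))
print("anomalous:", model.score(["login", "delete_user", "delete_user"]))
```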
Abstract:
The Matosinhos Refinery is one of Galp Energia's industrial complexes. Its industrial wastewater treatment plant (ETARI), internally designated Unit 7000, comprises four treatments: pre-treatment, physico-chemical treatment, biological treatment and post-treatment. Given their interconnection, the optimization of each treatment is essential. The objectives of this work were to identify problems and/or opportunities for improvement in the pre-treatment, physico-chemical treatment and post-treatment, and above all to optimize the biological treatment of the ETARI. In the pre-treatment it was found that the separation of oils and sludge was not effective, since emulsions of these two phases form. As a solution, the addition of demulsifying agents was suggested, which proved economically unviable. As an alternative, techniques for treating the generated emulsion were suggested, such as solvent extraction, centrifugation, ultrasound and microwaves. In the physico-chemical treatment it was found that the air-in-water saturation unit was controlled based on visual inspection by the operators, which can lead to operating conditions far from the optimum for this treatment. It was therefore suggested that an optimization study of this unit be carried out to determine the optimal air-to-solids ratio for this effluent. In addition, it was found that coagulant consumption increased by about --% in the last year, so a feasibility study of electrocoagulation as a replacement for the existing coagulation system was suggested. In the post-treatment, the filter backwashing process was identified as the step with potential for optimization. A preliminary study concluded that continuously washing one filter per shift improved filter performance. It was also found that introducing compressed air into the wash water promotes greater removal of debris from the sand bed; however, this practice appears to negatively affect filter performance. In the biological treatment, problems were identified with the hydraulic retention time of biological treatment II, which showed high variability. Although identified, this problem proved difficult to solve. It was also found that dissolved oxygen was not monitored, so the installation of a dissolved oxygen probe in a low-turbulence zone of the aeration tank was suggested. It was concluded that oxygen was distributed homogeneously throughout the aeration tank, and an attempt was made to identify the factors influencing this parameter; however, given the high variability of the effluent and of the treatment conditions, this was not possible. It was also found that phosphate dosing for biological treatment II was rather inefficient, since on --% of days low phosphate levels (< - mg/L) were observed in the mixed liquor. It was therefore proposed to replace the current gravity dosing system with a dosing pump system. In addition, consumption of this nutrient increased significantly in the last year (about --%), a situation found to be related to an increase in the microbial population over this period.
It was possible to relate the frequent appearance of sludge at the surface of the secondary clarifiers to sudden increases in conductivity, so it was suggested that the effluent be stored in the storm basins in these situations. Nitrogen removal was found to be practically ineffective, since the conversion of ammoniacal nitrogen to nitrates was very low. It was therefore suggested to use bio-augmentation or to convert the activated sludge system into a two-stage system. Finally, it was found that the temperature of the effluent entering the ETARI is quite high for biological treatment (approximately -- °C), so the installation of a temperature probe in the aeration tank was suggested in order to control the mixed liquor temperature more effectively. Still with regard to the biological treatment, a set of tools was developed aimed at its optimized operation. To this end, several improvement suggestions were presented: the use of the sludge volume index as an indicator of sludge quality, as an alternative to the sludge percentage; a set of flowcharts was developed to guide field operators in troubleshooting; an "operating window" was created, intended as a guide to support operation; and frequent monitoring of the sludge age and of the food-to-microorganism ratio was also proposed.
Abstract:
Electrical neuromodulation of lumbar segments improves motor control after spinal cord injury in animal models and humans. However, the physiological principles underlying the effect of this intervention remain poorly understood, which has limited the therapeutic approach to continuous stimulation applied to restricted spinal cord locations. Here we developed stimulation protocols that reproduce the natural dynamics of motoneuron activation during locomotion. For this, we computed the spatiotemporal activation pattern of muscle synergies during locomotion in healthy rats. Computer simulations identified optimal electrode locations to target each synergy through the recruitment of proprioceptive feedback circuits. This framework steered the design of spatially selective spinal implants and real-time control software that modulate extensor and flexor synergies with precise temporal resolution. Spatiotemporal neuromodulation therapies improved gait quality, weight-bearing capacity, endurance and skilled locomotion in several rodent models of spinal cord injury. These new concepts are directly translatable to strategies to improve motor control in humans.
Abstract:
Crystallization is employed in many industrial processes; the method and mode of operation differ depending on the nature of the substances involved. The aim of this study is to examine the effect of various operating conditions on crystal properties within a chemical engineering design window, with a focus on ultrasound-assisted cooling crystallization. Batch-to-batch variation, minimal manufacturing steps and faster production times are factors that continuous crystallization seeks to address. Scale-up of continuous processes is considered straightforward compared with batch processes, since capacity can be increased by extending the processing time in a given reactor. In a cooling crystallization process, ultrasound can be used to control the crystal properties. Different model compounds were used to define suitable process parameters for the modular crystallizer, using equal operating conditions in each module. A final temperature of 20 °C was employed in all experiments, while the other operating conditions differed. The studied process parameters and the configuration of the crystallizer were manipulated to achieve continuous operation without crystal clogging along the crystallization path. The results from the continuous experiments were compared with the batch crystallization results, and the crystals were analysed using the Malvern Morphologi G3 instrument to determine crystal morphology and crystal size distribution (CSD). The modular crystallizer was operated successfully at three different residence times. At optimal process conditions, a longer residence time gives smaller crystals and a narrower CSD. The findings show that, at constant initial solution concentration, the residence time had a clear influence on crystal properties. The equal-supersaturation criterion in each module offered better results than other cooling profiles. Compared with batch processes, the combination of continuous crystallization and ultrasound has large potential to overcome clogging; to obtain reproducible and narrow CSDs, specific crystal morphologies and uniform particle sizes; and to eliminate milling stages.
Abstract:
This article is concerned with the numerical detection of bifurcation points of nonlinear partial differential equations as some parameter of interest is varied. In particular, we study in detail the numerical approximation of the Bratu problem, based on exploiting the symmetric version of the interior penalty discontinuous Galerkin finite element method. A framework for a posteriori control of the discretization error in the computed critical parameter value is developed based upon the application of the dual weighted residual (DWR) approach. Numerical experiments are presented to highlight the practical performance of the proposed a posteriori error estimator.
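For context, the Bratu problem mentioned above is the semilinear elliptic equation below (a standard statement of the problem); its solution branch folds back at a critical parameter value, and it is the discretization error in that computed critical value which the estimator controls:

```latex
% The Bratu problem: solutions exist for lambda below a critical value
% lambda^*, at which the solution branch has a turning point (fold).
\[
  -\Delta u = \lambda\, e^{u} \quad \text{in } \Omega,
  \qquad u = 0 \quad \text{on } \partial\Omega .
\]
```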
Abstract:
In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
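Both of these articles build on the standard dual-weighted-residual error representation, which for a target functional J reads, schematically:

```latex
% Schematic DWR identity: the error in a target functional J is the primal
% residual rho(u_h) weighted by the dual solution z, localized into
% elementwise indicators eta_K that drive adaptive mesh refinement.
\[
  J(u) - J(u_h) \;\approx\; \rho(u_h)(z - z_h)
  \;=\; \sum_{K \in \mathcal{T}_h} \eta_K ,
\]
% where z_h is a computable approximation of the dual solution z.
```

In the bifurcation setting above, the target functional is the critical parameter itself (the critical Reynolds number or, for the Bratu problem, the critical lambda).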