928 results for variable structure control


Relevance: 30.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies, which do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
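
To make the weighting-and-integration step concrete, below is a minimal sketch of the computation the abstract describes, using plain Monte Carlo over the model set in place of the asymptotic approximation used in the thesis; the function names and the toy stiffness prior are illustrative, not from the original work.

```python
import numpy as np

def probable_failure(cond_fail_prob, model_prior_sampler, n_models=1000):
    """Estimate P(failure) = integral of P(failure | theta) p(theta) dtheta
    by Monte Carlo over the model set.

    cond_fail_prob(theta)  -- conditional failure probability for model theta
    model_prior_sampler(n) -- draws n model parameter samples from the prior
    """
    thetas = model_prior_sampler(n_models)
    return np.mean([cond_fail_prob(th) for th in thetas])

# Toy example: scalar stiffness uncertainty; failure probability rises as
# the stiffness drifts from its nominal value (purely illustrative).
rng = np.random.default_rng(0)
prior = lambda n: rng.normal(1.0, 0.1, size=n)            # model prior p(theta)
cond = lambda th: 1.0 / (1.0 + np.exp(-10 * abs(th - 1.0) + 3))

print(f"probable failure = {probable_failure(cond, prior):.4f}")
```

A Bayesian update with new response data would replace the prior sampler with a posterior sampler and rerun the same integration, which is the system-identification link the abstract describes.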

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers designed for the same benchmark system using other approaches. The second application is the Caltech Flexible Structure, a lightweight aluminum truss structure actuated by three voice-coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance: 30.00%

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB equation reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is second order, is nonlinear, and examples exist where the problem has no solution in the classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
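
A useful reference point is the standard form of this linearization from the linearly-solvable SOC literature; the statement below is consistent with the abstract but not necessarily identical to the thesis's notation. With dynamics dx = f dt + G(u dt + dω), state cost rate q, control cost (1/2)u'Ru, and the structural assumption λ G R^{-1} G^T = Σ (noise enters through the control channels), the substitution V = -λ log Ψ removes the nonlinearity:

```latex
% Nonlinear HJB with quadratic control cost; the minimizer is
% u^* = -R^{-1} G^\top \nabla V:
-\partial_t V = \min_u \Big[\, q + \tfrac{1}{2} u^\top R u
  + (\nabla V)^\top (f + G u)
  + \tfrac{1}{2} \operatorname{tr}\!\big(\Sigma \, \nabla^2 V\big) \Big]

% Substituting V = -\lambda \log \Psi with \lambda G R^{-1} G^\top = \Sigma
% cancels the quadratic term, leaving a PDE linear in the desirability \Psi:
\partial_t \Psi = \frac{q}{\lambda}\, \Psi - f^\top \nabla \Psi
  - \tfrac{1}{2} \operatorname{tr}\!\big(\Sigma \, \nabla^2 \Psi\big)
```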

This is done by combining previously disjoint lines of computational research. The first is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem, and an SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. These results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
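
The mechanics of an SOS relaxation reduce to a semidefinite feasibility problem: a polynomial is certified nonnegative if it equals z'Qz for a positive semidefinite Gram matrix Q over a monomial basis z. Below is a minimal univariate sketch of that reduction using cvxpy; the thesis applies the same machinery to the partial differential constraints of the HJB, which this toy does not attempt.

```python
import cvxpy as cp
import numpy as np

# Certify p(x) = x^4 - 2x^2 + 1 as a sum of squares: find a positive
# semidefinite Gram matrix Q with z^T Q z = p(x) for z = [1, x, x^2].
# Matching coefficients of 1, x, x^2, x^3, x^4 gives linear constraints.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Q must be PSD (the SOS condition)
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 0,             # x
    2 * Q[0, 2] + Q[1, 1] == -2,  # x^2
    2 * Q[1, 2] == 0,             # x^3
    Q[2, 2] == 1,                 # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)     # "optimal" -> an SOS certificate exists
print(np.round(Q.value, 3))       # e.g. the Gram matrix of (x^2 - 1)^2
```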

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
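
A separated representation expresses a multivariate function as a short sum of products of univariate factors, so storage and work grow linearly with dimension per rank term. The toy below fits a rank-1 representation F(x, y) ≈ f(x)g(y) by alternating least squares; it is a two-dimensional caricature of the approach, not the solver used in the thesis.

```python
import numpy as np

# Rank-1 separated representation F(x, y) ~ f(x) g(y), fit by alternating
# least squares (ALS). In d dimensions this costs O(d) per rank term
# rather than growing exponentially with d.
x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 60)
F = np.exp(-np.add.outer(x**2, y**2))   # separable target: e^{-x^2} e^{-y^2}

f = np.ones_like(x)                     # initialize factors
g = np.ones_like(y)
for _ in range(20):                     # alternating least-squares sweeps
    f = F @ g / (g @ g)                 # best f given g (closed form)
    g = F.T @ f / (f @ f)               # best g given f (closed form)

err = np.linalg.norm(F - np.outer(f, g)) / np.linalg.norm(F)
print(f"relative error: {err:.2e}")
```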

The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance: 30.00%

Abstract:

On the materials scale, thermoelectric efficiency is defined by the dimensionless figure of merit zT. This value is made up of three material components in the form zT = α²T/(ρκ), where α is the Seebeck coefficient, ρ is the electrical resistivity, and κ is the total thermal conductivity. Improving zT therefore requires reducing κ and ρ while increasing α. However, because the electrical and thermal properties of materials are interrelated, typical routes to thermoelectric enhancement take one of two forms. The first is to isolate the electronic properties and increase α without negatively affecting ρ. Techniques such as electron filtering, quantum confinement, and density-of-states distortions have been proposed to enhance the Seebeck coefficient in thermoelectric materials, although it has been difficult to prove their efficacy. More recently, manipulating the band degeneracy of semiconductors has been explored as a means to enhance α.

The other route to thermoelectric enhancement is minimizing the thermal conductivity, κ. More specifically, thermal conductivity can be broken into two parts, an electronic term κe and a lattice term κl. From a functional-materials standpoint, a reduction in lattice thermal conductivity should have a minimal effect on the electronic properties, so most enhancement routes focus on reducing κl. The components that make up κl (κl = (1/3)Cνl) are the heat capacity (C), the phonon group velocity (ν), and the phonon mean free path (l). Since altering the heat capacity and group velocity is extremely difficult, the phonon mean free path is most often the quantity targeted for reduction.
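
Read numerically, the two formulas above are straightforward; the sketch below evaluates them for illustrative, Bi2Te3-like round numbers (not measured data from this work).

```python
def figure_of_merit(alpha, rho, kappa, T):
    """zT = alpha^2 * T / (rho * kappa), SI units throughout."""
    return alpha**2 * T / (rho * kappa)

def lattice_kappa(C, v, mfp):
    """Kinetic formula: kappa_l = (1/3) * C * v * l."""
    return C * v * mfp / 3.0

# Illustrative values only:
alpha = 200e-6   # Seebeck coefficient, V/K
rho   = 1.0e-5   # electrical resistivity, ohm*m
kappa = 1.5      # total thermal conductivity, W/(m*K)
print(f"zT(300 K) = {figure_of_merit(alpha, rho, kappa, 300):.2f}")  # ~0.8

# kappa_l for C = 1.2e6 J/(m^3*K), v = 2000 m/s, l = 1 nm:
print(f"kappa_l = {lattice_kappa(1.2e6, 2000, 1e-9):.2f} W/(m*K)")   # 0.8
```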

Past routes to decreasing the phonon mean free path have been alloying and grain-size reduction. In alloying, however, electron mobility is often negatively affected, because any perturbation to the periodic potential can cause additional adverse carrier scattering. Grain-size reduction has been another successful route to enhancing zT because of the significant difference between electron and phonon mean free paths, but it is erratic in anisotropic materials due to their orientation-dependent transport properties. Nevertheless, microstructure formation in both equilibrium and non-equilibrium processing routes can be used to effectively reduce the phonon mean free path and thereby enhance the figure of merit.

This work starts with a discussion of several deliberate microstructure varieties. Control of the morphology and, finally, of the structure size and spacing is discussed at length. Since the material example used throughout this thesis is anisotropic, a short primer on zone melting is presented as an effective route to growing homogeneous and oriented polycrystalline material. The resulting microstructure formation and control are presented specifically for In2Te3-Bi2Te3 composites, along with the transport properties pertinent to thermoelectric materials. Finally, the transport properties of iodine-doped Bi2Te3 are presented and discussed as a re-evaluation of the literature data in light of what is known today.

Relevance: 30.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and then compared in an abstract "value space" in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type, confirming that value is represented in abstraction, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple yet non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects' reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
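
As a concrete reading of that strategy, the sketch below performs incremental Bayesian updating with belief thresholding: hypotheses whose posterior falls below a cutoff are frozen and no longer updated. All numbers and the threshold are illustrative, not fitted values from the study.

```python
import numpy as np

def update_beliefs(beliefs, likelihoods, active, threshold=0.01):
    """One incremental Bayesian step with belief thresholding: hypotheses
    whose posterior drops below `threshold` are frozen (no longer updated,
    though they still take part in normalization)."""
    posterior = beliefs.copy()
    posterior[active] *= likelihoods[active]      # Bayes rule, live hypotheses
    posterior /= posterior.sum()                  # renormalize
    active = active & (posterior >= threshold)    # prune low-probability ones
    return posterior, active

beliefs = np.full(4, 0.25)                        # uniform prior, 4 hypotheses
active = np.ones(4, dtype=bool)
for lik in ([0.9, 0.5, 0.1, 0.05], [0.8, 0.6, 0.2, 0.1]):
    beliefs, active = update_beliefs(beliefs, np.array(lik), active)
print(beliefs.round(3), active)   # the weakest hypothesis ends up frozen
```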

Relevance: 30.00%

Abstract:

A 2-D SW-banyan network is introduced by properly folding the 1-D SW-banyan network, and a corresponding optical setup is proposed based on polarizing beamsplitters and 2-D phase spatial light modulators. Then, based on the network's characteristics and the proposed optical setup, the control of the routing path between any source-destination pair is given, and a method for determining whether a given permutation is permissible is discussed. Because the proposed optical setup consists only of optical polarization elements, it is compact in structure, its energy loss and crosstalk are low, and its available number of channels is high. (C) 1996 Society of Photo-Optical Instrumentation Engineers.
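
For intuition about the routing control described, the sketch below implements the classic destination-tag (self-routing) rule for a banyan-class network and a stage-by-stage link-conflict check for permutation admissibility. It follows the standard 1-D shuffle-exchange formulation, not the paper's folded 2-D optical construction.

```python
def destination_tag(dst, n_stages):
    """Destination-tag (self-)routing: at stage k, bit k of dst (MSB first)
    selects the switch output (0 = upper, 1 = lower)."""
    return [(dst >> (n_stages - 1 - k)) & 1 for k in range(n_stages)]

def is_admissible(perm):
    """Check a permutation for link conflicts in the classic
    shuffle-exchange (omega/banyan) topology: two paths collide
    iff they occupy the same link at the same stage."""
    n = len(perm)
    n_stages = n.bit_length() - 1
    used = set()
    for src, dst in enumerate(perm):
        addr = src
        for k in range(n_stages):
            bit = (dst >> (n_stages - 1 - k)) & 1
            addr = ((addr << 1) | bit) % n     # shuffle, then exchange by bit
            if (k, addr) in used:
                return False
            used.add((k, addr))
    return True

print(destination_tag(5, 3))                    # [1, 0, 1]
print(is_admissible([0, 1, 2, 3, 4, 5, 6, 7]))  # identity: True
```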

Relevance: 30.00%

Abstract:

We are at the cusp of a historic transformation of both communication systems and electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and we demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
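
The flavor of such a fluid model can be seen in the toy simulation below, which couples two paths through a shared increase term and integrates the window ODEs to equilibrium. The update law here is a generic coupled rule for illustration only; it is not Balia's actual equations.

```python
import numpy as np

# Toy fluid model of coupled multipath congestion control over two paths:
# each window grows in proportion to 1 / (total window), coupling the
# paths so the aggregate behaves roughly like a single TCP flow, and is
# halved on loss. A generic illustrative law, NOT Balia's update rule.
rtt = np.array([0.05, 0.10])        # per-path round-trip times (s)
p   = np.array([0.010, 0.002])      # per-path packet-loss probabilities
w   = np.array([10.0, 10.0])        # per-path congestion windows (pkts)

dt = 1e-3
for _ in range(200_000):            # integrate the window ODEs for 200 s
    x = w / rtt                     # per-path sending rates (pkts/s)
    dw = x * (1 - p) / w.sum() - x * p * w / 2
    w = np.maximum(w + dt * dw, 1.0)

print("equilibrium windows:", w.round(2))
print("equilibrium rates  :", (w / rtt).round(1))
```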

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.
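
A minimal sketch of the greedy flavor of such a heuristic is below: with all switches closed, keep the spanning tree that minimizes a per-line loss proxy, which restores radiality. In the actual algorithm the line weights would come from the convex-relaxation OPF solution; here they are fixed illustrative numbers.

```python
import networkx as nx

# Greedy feeder-reconfiguration sketch: close all switches, then keep the
# spanning tree minimizing a per-line loss proxy. True = switch closed.
G = nx.Graph()
lines = [                      # (from, to, loss_proxy) -- illustrative data
    (0, 1, 0.4), (1, 2, 0.3), (2, 3, 0.5),
    (3, 0, 0.6), (1, 3, 0.2),
]
G.add_weighted_edges_from(lines)

tree = nx.minimum_spanning_tree(G)        # radial network, min total proxy
closed = set(frozenset(e) for e in tree.edges())
status = {(u, v): frozenset((u, v)) in closed for u, v, _ in lines}
print(status)
```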

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, faster and distributed solutions are needed for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that solve optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition yields closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
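
The computational point, that closed-form subproblem updates avoid inner iterative solvers, can be seen in the generic ADMM consensus sketch below. It is scaffolding only: the thesis's OPF decomposition has richer subproblems, but they retain this closed-form (or small eigenproblem) character.

```python
import numpy as np

# Generic ADMM consensus sketch: minimize sum_i (1/2)(x_i - a_i)^2
# subject to x_i = z for all i. Every update below is closed form.
a = np.array([1.0, 3.0, 8.0])             # local data held by three agents
x = np.zeros(3); z = 0.0; u = np.zeros(3)
rho = 1.0                                 # ADMM penalty parameter

for _ in range(100):
    x = (a + rho * (z - u)) / (1 + rho)   # closed-form local x-updates
    z = np.mean(x + u)                    # closed-form consensus update
    u = u + x - z                         # dual variable ascent

print(f"consensus value: {z:.4f} (mean of a = {a.mean():.4f})")
```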

Relevance: 30.00%

Abstract:

The centralized paradigm of a single controller and a single plant, upon which modern control theory is built, is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop.

Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, architectural aspects of the controller, such as the placement of actuators and sensors and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
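
A matrix-level caricature of that separation idea is sketched below: an observed response is split into a low-rank "global" component and a residual "local" component via nuclear-norm minimization with cvxpy. The thesis poses the analogous problem for transfer functions; the sizes and regularization weight here are illustrative.

```python
import cvxpy as cp
import numpy as np

# Split an observed response M into a low-rank "global" part L plus a
# residual "local" part via nuclear-norm minimization (a static-matrix
# stand-in for the transfer-function problem described in the thesis).
rng = np.random.default_rng(1)
n = 20
L_true = np.outer(rng.normal(size=n), rng.normal(size=n))  # rank-1 global
S_true = 0.1 * rng.normal(size=(n, n))                     # dense local
M = L_true + S_true

L = cp.Variable((n, n))
lam = 0.5                                                  # illustrative weight
cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum_squares(M - L))).solve()
print("recovered rank of L:", np.linalg.matrix_rank(L.value, tol=1e-2))
```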

Relevance: 30.00%

Abstract:

The Brazilian capital market is characterized by a high concentration of power in the hands of a few controlling shareholders. In Brazil, the existence of non-voting preferred shares gives rise to an agency conflict between controlling and minority shareholders, aggravated by the fact that control can be exercised with a relatively small share of the total stock issued by a company. This concentration of ownership opens the possibility of expropriation of minority shareholders' rights. Several empirical studies have been conducted over recent years to assess the influence of share ownership structure on companies' market value. In this context, the present work aims to bring new contributions, with emphasis on the share of preferred stock in the ownership structure. Using a sample of publicly traded companies listed on BM&FBOVESPA and a difference-of-means test, we reject the hypothesis of equal value between firms whose ownership structure contains only common (ON) shares and firms with both types, ON and preferred (PN). Further, linear regression models reveal a statistically significant negative relation between firms' market value and the variable used to characterize ownership structure, specifically the difference between the non-controlling shareholders' percentage of total PN shares and the controlling shareholders' percentage of total PN shares.
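
For concreteness, the regression described can be sketched with statsmodels on synthetic data, as below; all column names and coefficients are hypothetical, not the study's data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch of the regression in the abstract: firm market value
# on the difference between non-controllers' and controllers' shares of
# preferred (PN) stock. Synthetic data; hypothetical variable names.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pn_noncontrol_pct": rng.uniform(0, 1, n),
    "pn_control_pct": rng.uniform(0, 1, n),
    "size": rng.normal(10, 1, n),              # a typical control variable
})
df["pn_diff"] = df["pn_noncontrol_pct"] - df["pn_control_pct"]
df["market_value"] = (5 - 0.8 * df["pn_diff"] + 0.3 * df["size"]
                      + rng.normal(0, 0.5, n))  # built-in negative relation

model = smf.ols("market_value ~ pn_diff + size", data=df).fit()
print(model.params)
```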

Relevance: 30.00%

Abstract:

The general objective of this thesis is to investigate the association between stress and workplace accidents among permanent technical-administrative staff of a public university in Rio de Janeiro using multilevel models. To this end, the thesis is organized into two articles. The first investigates the association between stress and workplace accidents while accounting for the hierarchical structure of the data, using multilevel models with employees at the first level grouped into work sectors at the second level. The second article investigates the behavior of the fixed and random coefficients of cross-classified multilevel models (work sectors crossed with occupational groups) relative to multilevel models that consider only the hierarchical component of work sectors, ignoring the adjustment for occupational groups. Psychosocial stress at work was approached through the relation between high psychological demand and low control over the labor process. These dimensions were captured with the short version of the Karasek scale, which also contains information on social support at work. Isolated dimensions of job stress (demand and control), the ratio between psychological demand and job control (D/C ratio), and social support at work were measured at the individual level and at the work-sector level. Overall, the results highlight psychological demand measured at the individual level as an important factor associated with the occurrence of workplace accidents. Social support at work, measured at the individual and sector levels, was inversely associated with the prevalence of workplace accidents, with the sector-level association stronger among women. The results also show that the fixed parameters of the models with and without cross-classification were similar and that, in general, the standard errors (SE) were somewhat larger in the cross-classified models, although this SE behavior was not observed for the fixed coefficients of the variables aggregated at the work-sector level. The greatest distinction between the two approaches was observed in the random coefficients associated with the work sectors, which changed substantially after adjusting for occupation through the cross-classified models. This study reinforces the importance of psychosocial characteristics in the occurrence of workplace accidents and contributes to the understanding of these relations through analytical approaches that refine the capture of the dependence structure of individuals in their work environment. Further studies with similar methodology are suggested to deepen knowledge about stress and workplace accidents.
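
The first article's two-level structure can be sketched with a random-intercept model in statsmodels, as below; the binary accident outcome is simplified to a linear probability model for illustration, and the data and variable names are synthetic, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Two-level sketch: employees (level 1) nested in work sectors (level 2)
# via a random intercept per sector. Synthetic data throughout.
rng = np.random.default_rng(0)
n_sectors, per_sector = 30, 40
sector = np.repeat(np.arange(n_sectors), per_sector)
df = pd.DataFrame({
    "sector": sector,
    "demand": rng.normal(0, 1, n_sectors * per_sector),   # psychological demand
    "support": rng.normal(0, 1, n_sectors * per_sector),  # social support
})
sector_re = rng.normal(0, 0.3, n_sectors)                 # sector intercepts
logit = -2 + 0.4 * df["demand"] - 0.3 * df["support"] + sector_re[sector]
df["accident"] = (rng.uniform(size=len(df))
                  < 1 / (1 + np.exp(-logit))).astype(float)

m = smf.mixedlm("accident ~ demand + support", df, groups=df["sector"]).fit()
print(m.summary().tables[1])
```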

Relevance: 30.00%

Abstract:

This document presents the results of the different functionalities analyzed for the Sinamics G120 variable-frequency drive. The device was configured with the TIA Portal control software to perform speed-control tasks for an induction motor, as well as position control of the motor shaft and of a linear axis coupled to it.

Relevance: 30.00%

Abstract:

[EN] The aim of this paper is to determine to what extent globalization pressures are changing interlocking directorate networks modeled on continental capitalism into Anglo-Saxon models. For this purpose we analyse the Spanish network of interlocks, comparing the present structure (2012) with that of 1993 and 2006. We show that, although the Spanish corporate structure continues to display characteristics of the continental economies, some major banks are significantly reducing their industrial activity. Nevertheless, financial organizations continue to maintain a close relationship with sectors such as construction and services. The analysis of the network of directorates shows a retreat of industrial banking activity in Spain. Two large Spanish financial institutions, BSCH and La Caixa, still undertook industrial banking activities in 2006, but this activity is significantly reduced in 2012. According to theories on the role of interlocking directorates, companies in these sectors secure their access to bank credit by incorporating advisors from financial organizations into their boards of directors. We cannot conclude that the structure of the Spanish corporate network has become a new case of Anglo-Saxon structure, but we find indications that it is becoming less hierarchical, as banks seem to be slowly abandoning central positions. This is especially salient when comparing the networks of 2006 and 2012, which show a continuing decrease in the role of banks and insurance companies in the network.
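
The network construction underlying such an analysis can be sketched with networkx: a bipartite director-company graph is projected onto companies, and centrality then tracks which firms (e.g., banks) occupy central positions. The toy board data below are hypothetical, not the paper's Spanish board data.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Interlock-network sketch: directors and companies form a bipartite graph;
# projecting onto companies yields the interlock network (edges = shared
# directors), on which centrality can be compared across years.
B = nx.Graph()
boards = {                      # director -> board seats (hypothetical)
    "d1": ["BankA", "Construct1"], "d2": ["BankA", "Services1"],
    "d3": ["BankB", "Construct1"], "d4": ["Services1", "Construct1"],
}
for director, firms in boards.items():
    B.add_node(director, bipartite=0)
    for firm in firms:
        B.add_node(firm, bipartite=1)
        B.add_edge(director, firm)

firms = {n for n, side in B.nodes(data="bipartite") if side == 1}
interlocks = bipartite.weighted_projected_graph(B, firms)
print(nx.degree_centrality(interlocks))
```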

Relevance: 30.00%

Abstract:

Ternary CoNiP nanowire (NW) arrays have been synthesized by electrochemical deposition inside the nanochannels of an anodic aluminum oxide (AAO) template. The CoNiP NWs deposited at room temperature present soft magnetic properties, with both parallel and perpendicular coercivities below 500 Oe. In contrast, as the electrolyte temperature (T-elc) increases from 323 to 343 K, the NWs exhibit hard magnetic properties with coercivities in the range of 1000-2500 Oe. This dramatic increase in coercivity can be attributed to domain wall pinning related to the formation of Ni and Co nanocrystallites and the increase in P content. A maximum parallel coercivity (i.e., with the applied field perpendicular to the membrane surface) as high as 2500 Oe, with a squareness ratio up to 0.8, is achieved at an electrolyte temperature of 328 K. It has been demonstrated that the parallel coercivity of CoNiP NWs can be tuned over a wide range of 200-2500 Oe by controlling the electrolyte temperature, providing an easy way to control magnetic properties and thereby enable their integration with magnetic micro-electromechanical systems (MEMS). (C) 2008 Elsevier B.V. All rights reserved.