907 results for Strictly positive real systems
Abstract:
The efficiency of wind power conversion systems can be greatly improved by using an appropriate control algorithm. In this work, a sliding mode control for a variable speed wind turbine that incorporates a doubly fed induction generator is described. The electrical system incorporates a wound rotor induction machine with back-to-back three-phase power converter bridges between its rotor and the grid. In the presented design, the so-called vector control theory is applied in order to simplify the electrical equations. The proposed control scheme uses stator flux-oriented vector control for the rotor side converter bridge and grid voltage vector control for the grid side converter bridge. The stability of the proposed sliding mode controller under disturbances and parameter uncertainties is analyzed using Lyapunov stability theory. Finally, simulation results show, on the one hand, that the proposed controller provides high-performance dynamic characteristics, and on the other hand, that the scheme is robust with respect to the uncertainties that usually appear in real systems.
Abstract:
Modern wind turbines are designed to work at variable speed. To perform this task, these turbines are provided with adjustable speed generators, such as the doubly fed induction generator (DFIG). One of the main advantages of adjustable speed generators is improved system efficiency compared with fixed speed generators, because turbine speed can be adjusted as a function of wind speed in order to maximize the output power. However, this system requires a suitable speed controller in order to track the optimal reference speed of the wind turbine. In this work, a sliding mode control for variable speed wind turbines is proposed. The proposed design also uses vector-oriented control theory in order to simplify the DFIG dynamical equations. The stability analysis of the proposed controller has been carried out under wind variations and parameter uncertainties using Lyapunov stability theory. Finally, the simulation results show, on the one hand, that the proposed controller provides high-performance dynamic behavior and, on the other hand, that the scheme is robust with respect to parameter uncertainties and wind speed variations, which usually appear in real systems.
Abstract:
Modern wind turbines are designed to work at variable speed. To perform this task, wind turbines are provided with adjustable speed generators, such as the doubly fed induction generator. One of the main advantages of adjustable speed generators is improved system efficiency compared to fixed speed generators, because turbine speed can be adjusted as a function of wind speed in order to maximize the output power. However, this system requires a suitable speed controller in order to track the optimal reference speed of the wind turbine. In this work, a sliding mode control for variable speed wind turbines is proposed. An integral sliding surface is used, because the integral term avoids the use of the acceleration signal, which reduces the high frequency components in the sliding variable. The proposed design also uses vector-oriented control theory in order to simplify the generator dynamical equations. The stability analysis of the proposed controller has been carried out under wind variations and parameter uncertainties using Lyapunov stability theory. Finally, simulation results show, on the one hand, that the proposed controller provides high-performance dynamic behavior and, on the other hand, that the scheme is robust with respect to the parameter uncertainties and wind speed variations that usually appear in real systems.
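The integral sliding surface idea summarized above can be illustrated with a minimal sketch: a scalar first-order plant (not the DFIG model of the abstract; the plant, gains, and disturbance below are illustrative assumptions) tracking a constant reference, with surface s = e + k∫e so that no derivative of the error is needed.

```python
import math

# Minimal integral sliding mode sketch for the scalar plant
#   dx/dt = a*x + u + d(t),  with d(t) an unknown bounded disturbance.
# Integral surface: s = e + k * integral(e), e = x - x_ref, which avoids
# differentiating the error (no acceleration-like signal required).
a, k, beta = 1.0, 2.0, 1.5      # plant pole, surface gain, switching gain > |d|
dt, T = 1e-3, 10.0
x, x_ref, ie = 0.0, 1.0, 0.0    # state, constant reference, integral of error

t = 0.0
while t < T:
    e = x - x_ref
    s = e + k * ie                       # integral sliding variable
    d = math.sin(t)                      # disturbance, |d| <= 1 < beta
    # Equivalent control cancels the known dynamics; the switching
    # term -beta*sign(s) rejects the disturbance and drives s to 0.
    u = -a * x - k * e - beta * (1.0 if s > 0 else -1.0)
    x += dt * (a * x + u + d)            # explicit Euler step
    ie += dt * e
    t += dt

print(abs(x - x_ref))                    # tracking error after 10 s (small)
```

On the surface s = 0 the error obeys e' = -k e, so the tracking error decays exponentially despite the disturbance, which is the robustness property Lyapunov analysis establishes in the abstract's setting.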
Abstract:
EFTA 2009
Abstract:
This thesis describes the theoretical solution and experimental verification of phase conjugation via nondegenerate four-wave mixing in resonant media. The theoretical work models the resonant medium as a two-level atomic system with the lower state of the system being the ground state of the atom. Working initially with an ensemble of stationary atoms, the density matrix equations are solved by third-order perturbation theory in the presence of the four applied electromagnetic fields which are assumed to be nearly resonant with the atomic transition. Two of the applied fields are assumed to be non-depleted counterpropagating pump waves while the third wave is an incident signal wave. The fourth wave is the phase conjugate wave which is generated by the interaction of the three previous waves with the nonlinear medium. The solution of the density matrix equations gives the local polarization of the atom. The polarization is used in Maxwell's equations as a source term to solve for the propagation and generation of the signal wave and phase conjugate wave through the nonlinear medium. Studying the dependence of the phase conjugate signal on the various parameters such as frequency, we show how an ultrahigh-Q isotropically sensitive optical filter can be constructed using the phase conjugation process.
In many cases the pump waves may saturate the resonant medium so we also present another solution to the density matrix equations which is correct to all orders in the amplitude of the pump waves since the third-order solution is correct only to first-order in each of the field amplitudes. In the saturated regime, we predict several new phenomena associated with degenerate four-wave mixing and also describe the ac Stark effect and how it modifies the frequency response of the filtering process. We also show how a narrow bandwidth optical filter with an efficiency greater than unity can be constructed.
In many atomic systems the atoms are moving at significant velocities such that the Doppler linewidth of the system is larger than the homogeneous linewidth. The latter linewidth dominates the response of the ensemble of stationary atoms. To better understand this case the density matrix equations are solved to third-order by perturbation theory for an atom of velocity v. The solution for the polarization is then integrated over the velocity distribution of the macroscopic system which is assumed to be a Gaussian distribution of velocities since that is an excellent model of many real systems. Using the Doppler broadened system, we explain how a tunable optical filter can be constructed whose bandwidth is limited by the homogeneous linewidth of the atom while the tuning range of the filter extends over the entire Doppler profile.
Since it is a resonant system, sodium vapor is used as the nonlinear medium in our experiments. The relevant properties of sodium are discussed in great detail. In particular, the wavefunctions of the 3S and 3P states are analyzed and a discussion of how the 3S-3P transition models a two-level system is given.
Using sodium as the nonlinear medium we demonstrate an ultrahigh-Q optical filter using phase conjugation via nondegenerate four-wave mixing as the filtering process. The filter has a FWHM bandwidth of 41 MHz and a maximum efficiency of 4 × 10⁻³. However, our theoretical work and other experimental work with sodium suggest that an efficient filter with both gain and a narrower bandwidth should be quite feasible.
Abstract:
Energy and sustainability have become one of the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.
The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT is one of the fastest growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these efficiency improvements do not necessarily lead to a reduction in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms with rigorous theoretical analysis that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
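The "follow the renewables" routing idea can be sketched as a toy greedy allocation; this is an illustration only, not the thesis's distributed online algorithms, and the capacities, renewable forecasts, and load figures below are made up.

```python
# Toy "follow the renewables" routing: absorb flexible load with the
# renewable headroom at each data center first, and spill the remainder
# into leftover capacity, which must be served by grid ("brown") energy.
# All numbers are illustrative, not from the thesis.
def route(load, capacity, renewable):
    n = len(capacity)
    alloc = [0.0] * n
    # Phase 1: serve load with renewable energy wherever it is available.
    for i in range(n):
        take = min(load, capacity[i], renewable[i])
        alloc[i] = take
        load -= take
    # Phase 2: spill any leftover load into remaining capacity.
    for i in range(n):
        take = min(load, capacity[i] - alloc[i])
        alloc[i] += take
        load -= take
    brown = sum(max(0.0, alloc[i] - renewable[i]) for i in range(n))
    return alloc, brown

alloc, brown = route(load=100.0,
                     capacity=[60.0, 60.0, 60.0],
                     renewable=[50.0, 20.0, 5.0])
print(alloc, brown)   # [60.0, 35.0, 5.0] 25.0
```

With a total renewable supply of 75 units, only 25 of the 100 load units need brown energy; a renewable-oblivious assignment could use considerably more. The real problem in the thesis adds uncertainty, performance constraints, and distributed control on top of this basic trade-off.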
The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge.
To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and the society to improve social welfare.
Abstract:
The study of different separation phenomena has become increasingly important for various branches of industry and science. Given current computational capacity, it is possible to model and analyze chromatographic phenomena at the microscopic level. Network models have been used increasingly to represent chromatographic separation processes, since they can capture the topological and morphological aspects of the different adsorbent materials available on the market. In this work we develop a three-dimensional network model to represent a chromatographic column at the microscopic level, in which the phenomena of adsorption, desorption, and axial dispersion are modeled by a stochastic method. Different approaches to steric hindrance were also employed. The results obtained were compared with experimental results. A two-dimensional network model is then used to represent a batch adsorption system, retaining the modeling of the adsorption and desorption phenomena, and is subsequently compared with real systems. In both modeled systems the equilibrium constants, a fundamental parameter in adsorption systems, were analyzed, and finally adsorption isotherms were obtained and analyzed. It was possible to conclude that, for the network models, the adsorption and desorption phenomena suffice to obtain elution profiles similar to those seen experimentally, and that axial dispersion influences the results less than the kinetic phenomena in question.
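The role of the equilibrium constant described above can be illustrated with a mean-field (deterministic) adsorption/desorption update rather than the stochastic network model of the work itself; the rate constants and concentrations below are hypothetical.

```python
# Mean-field adsorption/desorption on a lattice of sites: the coverage
# theta evolves by  theta' = ka*C*(1 - theta) - kd*theta.  At equilibrium
# this yields the Langmuir isotherm  theta = K*C / (1 + K*C), K = ka/kd.
# Rates and concentrations are illustrative, not fitted to any data.
ka, kd = 0.8, 0.2          # adsorption / desorption rate constants
K = ka / kd                # equilibrium constant

def equilibrium_coverage(C, dt=0.01, steps=20000):
    """Integrate the mean-field balance to its steady state."""
    theta = 0.0
    for _ in range(steps):
        theta += dt * (ka * C * (1.0 - theta) - kd * theta)
    return theta

for C in (0.1, 1.0, 10.0):
    langmuir = K * C / (1.0 + K * C)
    print(C, equilibrium_coverage(C), langmuir)   # the two columns agree
```

In the network model the same balance is realized site by site with random adsorption/desorption events, so the simulated isotherm fluctuates around the Langmuir curve rather than matching it exactly.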
Abstract:
The matrices studied here are positive stable (or briefly, stable). These are matrices, real or complex, whose eigenvalues have positive real parts. A theorem of Lyapunov states that A is stable if and only if there exists H > 0 such that AH + HA* = I. Let A be a stable matrix. Three aspects of the Lyapunov transformation L_A : H → AH + HA* are discussed.
1. Let C1(A) = {AH + HA* : H ≥ 0} and C2(A) = {H : AH + HA* ≥ 0}. The problems of determining the cones C1(A) and C2(A) are still unsolved. Using solvability theory for linear equations over cones, it is proved that C1(A) is the polar of C2(A*), and it is also shown that C1(A) = C1(A⁻¹). The inertia assumed by matrices in C1(A) is characterized.
2. The index of dissipation of A was defined to be the maximum number of equal eigenvalues of H, where H runs through all matrices in the interior of C2(A). Upper and lower bounds, as well as some properties of this index, are given.
3. We consider the minimal eigenvalue of the Lyapunov transform AH + HA*, where H varies over the set of all positive semi-definite matrices whose largest eigenvalue is less than or equal to one. Denote it by ψ(A). It is proved that if A is Hermitian and has eigenvalues μ1 ≥ μ2 ≥ … ≥ μn > 0, then ψ(A) = -(μ1 - μn)²/(4(μ1 + μn)). The value of ψ(A) is also determined in case A is a normal, stable matrix; then ψ(A) can be expressed in terms of at most three of the eigenvalues of A. If A is an arbitrary stable matrix, then upper and lower bounds for ψ(A) are obtained.
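Lyapunov's criterion quoted above (A is stable if and only if AH + HA* = I has a solution H > 0) is easy to check numerically for a concrete matrix; the matrix A below is an arbitrary illustrative example, and the equation is solved by vectorization.

```python
import numpy as np

# For the (illustrative) positive stable A below, solve A H + H A^T = I
# by vectorising:  (A ⊗ I + I ⊗ A) vec(H) = vec(I), then verify that the
# solution H is positive definite, as Lyapunov's theorem guarantees.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])               # eigenvalues 2 and 3: positive stable
n = A.shape[0]
M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
H = np.linalg.solve(M, np.eye(n).ravel()).reshape(n, n)

print(np.allclose(A @ H + H @ A.T, np.eye(n)))   # True: H solves the equation
print(np.linalg.eigvalsh(H))                      # both eigenvalues positive: H > 0
```

The Kronecker form works because the eigenvalues of A ⊗ I + I ⊗ A are the pairwise sums of eigenvalues of A, all of which have positive real part when A is stable, so the linear system is nonsingular.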
Abstract:
This dissertation applies maximum entropy regularization to the inverse problem of option pricing, as suggested by the work of Neri and Schneider in 2012. They observed that the probability density solving this problem, in the case of data from call options and digital options, can be described by exponentials on the different intervals of the positive half-line, these intervals being delimited by the strike prices. The maximum entropy criterion is a powerful tool for regularizing this ill-posed problem. The exponential family of the solution set is computed using the Newton-Raphson algorithm, with specific bounds for the digital options; these bounds follow from the no-arbitrage principle. The methodology was applied to data on the São Paulo Stock Exchange stock index and its call option prices at different strikes. A parametric analysis of the entropy as a function of the prices of synthetic digital options (constructed from bounds respecting no-arbitrage) showed values for which the digitals maximized the entropy. The example of IBOVESPA data extracted on January 24, 2013 showed a deviation from the no-arbitrage principle for in-the-money call options. This principle is a necessary condition for applying maximum entropy regularization in order to obtain the density and the prices. Our results showed that, once the convexity condition of no-arbitrage is fulfilled, it is possible to obtain a smile shape in the volatility curve, with prices computed from the model's exponential density, which makes the model consistent with market data. From a computational point of view, this dissertation made it possible to implement a pricing model that uses the maximum entropy principle.
Three classical algorithms were used: first, standard bisection, and then a combination of bisection with Newton-Raphson, to find the implied volatility from market data; next, the one-dimensional Newton-Raphson method to compute the coefficients of the exponential densities, which is the objective of the study; finally, Simpson's rule was used to compute the integrals of the cumulative distributions, as well as the model prices obtained through the mathematical expectation.
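The implied-volatility step mentioned above can be sketched with plain bisection (the Newton-Raphson refinement is omitted for brevity); the Black-Scholes inputs below are illustrative, not market data from the dissertation.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-10):
    """Bisection works because bs_call is increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price a call at sigma = 0.2, then recover sigma.
price = bs_call(100.0, 95.0, 0.05, 0.5, 0.2)
print(implied_vol(price, 100.0, 95.0, 0.05, 0.5))   # ≈ 0.2
```

In practice bisection gives a safe bracket and Newton-Raphson (using vega as the derivative) then converges quadratically from inside it, which is the combination the dissertation describes.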
Abstract:
Clare, A. and King R.D. (2003) Data mining the yeast genome in a lazy functional language. In Practical Aspects of Declarative Languages (PADL'03) (won Best/Most Practical Paper award).
Abstract:
We study axiomatically situations in which society agrees to treat voters with different characteristics distinctly. In this setting, we propose a set of intuitive axioms and show that they jointly characterize a new class of voting procedures, called Type-weighted Approval Voting. Under this family, each voter has a strictly positive and finite weight (necessarily the same for all voters with the same characteristics), and the alternative with the highest number of weighted votes is elected. The voting procedure reduces to Approval Voting when all voters are identical or when the procedure assigns the same weight to all types. Using this idea, we also obtain a new characterization of Approval Voting.
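A toy tally of Type-weighted Approval Voting as described above can make the rule concrete; the types, weights, and ballots below are invented for illustration.

```python
# Type-weighted Approval Voting: each voter's type carries a fixed
# positive weight, and the alternative with the largest weighted
# approval count wins. Types, weights, and ballots are made up.
weights = {"expert": 2.0, "layperson": 1.0}

ballots = [
    ("expert",    {"A", "B"}),
    ("expert",    {"B"}),
    ("layperson", {"A"}),
    ("layperson", {"A", "C"}),
    ("layperson", {"B", "C"}),
]

def twav_winner(ballots, weights):
    score = {}
    for voter_type, approved in ballots:
        for alt in approved:
            score[alt] = score.get(alt, 0.0) + weights[voter_type]
    return max(score, key=score.get), score

winner, score = twav_winner(ballots, weights)
print(winner, score)   # B wins with weighted score 5.0
```

Note that under plain Approval Voting (all weights equal to 1) A and B would tie here with three approvals each; the expert weight of 2 breaks the tie in favour of B, illustrating how the type weights change the outcome.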
Abstract:
Laboratory studies were conducted to investigate the interactions of nanoparticles (NPs) formed via simulated cloud processing of mineral dust with seawater under environmentally relevant conditions. The effects of sunlight and the presence of exopolymeric substances (EPS) were assessed on: (1) the colloidal stability of the nanoparticle aggregates (i.e. size distribution, zeta potential, polydispersity); (2) micromorphology; and (3) Fe dissolution from the particles. We have demonstrated that: (i) synthetic nano-ferrihydrite has distinct aggregation behaviour from NPs formed from mineral dusts, in that its average hydrodynamic diameter remained unaltered upon dispersion in seawater (~1500 nm), whilst all dust-derived NPs increased about threefold in aggregate size; (ii) relatively stable and monodisperse aggregates of NPs formed during simulated cloud processing of mineral dust become more polydisperse and unstable in contact with seawater; (iii) EPS forms stable aggregates with both the ferrihydrite and the dust-derived NPs, whose hydrodynamic diameter remains unchanged in seawater over 24 h; (iv) the dissolved Fe concentration from NPs, measured here as the <3 kDa filter fraction, is consistently >30% higher in seawater in the presence of EPS, and the effect is even more pronounced in the absence of light; (v) the micromorphology of nanoparticles from mineral dusts closely resembles that of synthetic ferrihydrite in MQ water, but in seawater with EPS they form less compact aggregates, highly variable in size, possibly due to EPS-mediated steric and electrostatic interactions. The larger scale implications for real systems of the EPS solubilising effect on Fe and other metals, together with the enhanced colloidal stability of the resulting aggregates, are discussed.
Abstract:
The rate of species loss is increasing on a global scale, and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single-species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators on herbivore and algal assemblages in a model benthic marine system. The predator groups were fish, shrimp and crabs. Each group was represented by at least two characteristic species based on data collected at local field sites. We examined the effects of the loss of predators while controlling for the loss of predator biomass. The identity, not the number, of predator groups affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores, such as ampithoids and caprellids. Predator identity also affected algal assemblage structure. It did not, however, affect total algal mass. Removing fish led to an increase in the final biomass of the least common taxa (red algae) and reduced the mass of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to facilitate the maintenance of a constant total algal biomass. In the absence of fish, shrimp at higher than ambient densities had a similar effect on herbivore abundance, showing that other groups could partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that, contrary to the assumptions of many food web models, predators cannot be classified into a single functional group: their role in food webs depends on their identity, their density, and carrying capacities in 'real' systems.
Abstract:
Community structure depends on both deterministic and stochastic processes. However, patterns of community dissimilarity (e.g. differences in species composition) are difficult to interpret in terms of the relative roles of these processes. Local communities can be more dissimilar than (divergence), less dissimilar than (convergence), or as dissimilar as a hypothetical control based on either null or neutral models. However, several mechanisms may result in the same pattern, or act concurrently to generate a pattern, and much recent research has focused on unravelling these mechanisms and their relative contributions. Using a simulation approach, we addressed the effect of a complex but realistic spatial structure in the distribution of the niche axis, and we analysed patterns of species co-occurrence and beta diversity as measured by dissimilarity indices (e.g. the Jaccard index), using either expectations under a null model or neutral dynamics (i.e., based on switching off the niche effect). The strength of niche processes, dispersal, and environmental noise interacted strongly, so that niche-driven dynamics may result in local communities that either diverge or converge depending on the combination of these factors. Thus, a fundamental result is that, in real systems, interacting processes of community assembly can be disentangled only by measuring traits such as niche breadth and dispersal. The ability to detect the signal of the niche was also dependent on the spatial resolution of the sampling strategy, which must account for the multiple-scale spatial patterns in the niche axis. Notably, some of the patterns we observed correspond to patterns of community dissimilarity previously observed in the field, and suggest mechanistic explanations for them or the data required to resolve them.
Our framework offers a synthesis of the patterns of community dissimilarity produced by the interaction of deterministic and stochastic determinants of community assembly in a spatially explicit and complex context.
Abstract:
The current theory of catalyst activity in heterogeneous catalysis is mainly derived from the study of catalysts with mono-phases, while most catalysts in real systems consist of multi-phases, the understanding of which falls far short of chemists' expectations. Density functional theory (DFT) and microkinetic simulations are used to investigate the activities of six mono-phase and nine bi-phase catalysts, using CO hydrogenation, arguably the most typical reaction in heterogeneous catalysis. Excellent activities beyond the activity peak of traditional mono-phase volcano curves are found on some bi-phase surfaces. By analyzing these results, a new framework for understanding the unexpected activities of bi-phase surfaces is proposed. Based on this framework, several principles for the design of multi-phase catalysts are suggested. The theoretical framework extends traditional catalysis theory to more complex systems.