941 results for average current control
Abstract:
Voltage source inverters use large electrolytic capacitors to decouple the energy between the utility and the load, keeping the DC-link voltage constant. Decreasing the capacitance reduces the distortion in the inverter input current, but it also subjects the load to low-order harmonics and generates disturbances at the input voltage. This paper applies a P+RES (proportional plus resonant) controller to the problem of regulating the output current by controlling the magnitude of the current space vector, keeping it constant and thus rejecting harmonic disturbances that would otherwise propagate to the load. The switching and control strategy is also discussed. © 2011 IEEE.
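The abstract does not give the controller structure; for reference, the ideal proportional plus resonant controller usually takes the form below, where the resonant term provides (ideally) infinite gain at the tuned frequency ω₀ and therefore zero steady-state error for sinusoidal references at that frequency:

\[ C(s) = K_p + \frac{K_r \, s}{s^2 + \omega_0^2} \]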
Abstract:
Two-stage isolated converters for photovoltaic (PV) applications commonly employ a high-frequency transformer on the DC-DC side, subjecting the DC-AC inverter switches to high voltages and forcing the use of IGBTs instead of low-voltage, low-loss MOSFETs. This paper presents the modeling, control, and simulation of a single-phase full-bridge inverter with a high-frequency transformer (HFT) that can be used either as part of a two-stage converter with a transformerless DC-DC side or as a single-stage converter (a simple DC-AC inverter) for grid-connected PV applications. The inverter is modeled to obtain a small-signal transfer function used to design the proportional-resonant (P+Resonant) current regulator. Placing the high-frequency step-up transformer on the DC-AC side allows lower-voltage switches and better efficiency than converters in which the transformer is used on the DC-DC side. Simulation and experimental results with a 200 W prototype are shown. © 2012 IEEE.
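The abstract does not describe the digital implementation of the regulator; below is a minimal sketch (assumed, not from the paper) of one common realization of a P+Resonant current controller: Tustin discretization of the resonant term with frequency prewarping at the grid frequency. All gains, the 60 Hz grid frequency, and the 20 kHz sampling rate are illustrative.

```python
import numpy as np

def pr_coeffs(kr, w0, ts):
    """Tustin-with-prewarping discretization of the ideal resonant term
    R(s) = kr*s / (s^2 + w0^2). Prewarping at w0 places the resonant
    poles exactly on the unit circle at exp(+/- 1j*w0*ts), so the
    infinite-gain frequency is not shifted by the bilinear transform."""
    c = w0 / np.tan(w0 * ts / 2.0)                 # prewarped bilinear constant
    d = c * c + w0 * w0
    b = (kr * c / d, 0.0, -kr * c / d)             # b0 + b1*z^-1 + b2*z^-2
    a = (1.0, 2.0 * (w0 * w0 - c * c) / d, 1.0)    # 1 + a1*z^-1 + a2*z^-2
    return b, a

def pr_controller(error, kp, kr, w0, ts):
    """Run the P+Resonant law u[k] = kp*e[k] + y[k] sample by sample,
    where y is the output of the discretized resonant term."""
    b, a = pr_coeffs(kr, w0, ts)
    e1 = e2 = y1 = y2 = 0.0                        # delay line
    u = []
    for e in error:
        y = b[0] * e + b[2] * e2 - a[1] * y1 - a[2] * y2
        e2, e1, y2, y1 = e1, e, y1, y              # shift delayed samples
        u.append(kp * e + y)
    return np.array(u)

# Illustrative use: a 60 Hz tracking error sampled at 20 kHz
ts, w0 = 1.0 / 20e3, 2 * np.pi * 60
t = np.arange(0.0, 0.1, ts)
u = pr_controller(np.sin(w0 * t), kp=0.5, kr=200.0, w0=w0, ts=ts)
```

Prewarping is the usual design choice here because a plain bilinear transform would slightly shift the resonance away from the grid frequency, losing the zero steady-state error property exactly where it is needed.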
Abstract:
The ethanol electro-oxidation reaction was studied on carbon-supported Pt and Rh and on Pt overlayers deposited on Rh nanoparticles. The synthesized electrocatalysts were characterized by TEM and XRD, and the reaction products were monitored by on-line DEMS experiments. Potentiodynamic curves showed a higher overall reaction rate for Pt/C than for Rh/C. However, on-line DEMS measurements revealed higher average current efficiencies for the complete ethanol electro-oxidation to CO2 on Rh/C. The average current efficiencies for CO2 formation increased with temperature and with decreasing ethanol concentration, whereas the total amount of CO2 was only slightly affected by either. Additionally, the CO2 signal was observed only in the positive-going scan, none being observed in the negative-going scan, evidencing that C-C bond breaking occurs only at the lower potentials. Thus, the CO2 formed mainly resulted from the oxidative removal of adsorbed CO and CHx,ad species generated at the lower potentials, rather than from the electrochemical oxidation of bulk ethanol molecules. The acetaldehyde mass signal, however, was greatly favored when the ethanol concentration was increased from 0.01 to 0.1 mol L-1 on both electrocatalysts, indicating that acetaldehyde is the major reaction product. For the Pt/Rh/C-based electrocatalysts, the Faradaic current and the conversion efficiency for CO2 formation were increased by adjusting the amount of Pt on the surface of the Rh/C nanoparticles. The higher conversion efficiency for CO2 formation on the Pt1Rh/C material was ascribed to faster and more extensive ethanol deprotonation on the Pt-Rh sites, producing adsorbed intermediates in which C-C bond cleavage is facilitated. (C) 2012 Elsevier B.V. All rights reserved.
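The abstract does not define the efficiency; in DEMS studies of ethanol oxidation the average current efficiency for CO2 is commonly computed by comparing the total faradaic charge with the charge equivalent of the CO2 detected by the calibrated mass signal, at six electrons per CO2 molecule (complete oxidation, C2H5OH + 3H2O → 2CO2 + 12H+ + 12e-, transfers twelve electrons for two CO2):

\[ \mathrm{CE}_{\mathrm{CO_2}} = \frac{6 F \, n_{\mathrm{CO_2}}}{Q_{\mathrm{F}}} \]

where \(n_{\mathrm{CO_2}}\) is the amount of CO2 detected and \(Q_{\mathrm{F}}\) the total faradaic charge. This is the standard bookkeeping, not a formula quoted from the paper.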
Abstract:
In this article, we present a new control chart for monitoring the covariance matrix of a bivariate process. In this method, the n observations of the two variables are treated as if they came from a single variable (a sample of 2n observations), and a sample variance is calculated. This statistic is used to build the new control chart, named the VMIX chart. The performance of the new chart was compared with that of its main competitors: the generalized sample variance chart, the likelihood ratio test, Nagao's test, the probability integral transformation (v(t)), and the recently proposed VMAX chart. Among these, only the VMAX chart was competitive with the VMIX chart. For shifts in both variances, the VMIX chart outperformed the VMAX chart; however, the VMAX chart performed better for large shifts (higher than 10%) in a single variance.
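The abstract describes the statistic's construction but not its implementation; a minimal sketch of that construction follows. The chi-square control limit and the prior standardization of each variable are assumptions made here for illustration; the paper derives its own limits, which must account for the correlation between the two variables.

```python
import numpy as np
from scipy.stats import chi2

def vmix_statistic(x, y):
    """Pool the n observations of each variable into one sample of
    size 2n and return its ordinary sample variance, as described."""
    pooled = np.concatenate([np.asarray(x), np.asarray(y)])
    return pooled.var(ddof=1)

def ucl(n, alpha=0.005):
    """Assumed upper control limit: for standardized in-control data
    the scaled statistic is treated as roughly chi-square, 2n-1 df."""
    return chi2.ppf(1.0 - alpha, df=2 * n - 1) / (2 * n - 1)

rng = np.random.default_rng(0)
x = rng.standard_normal(5)          # in-control variable
y = 1.5 * rng.standard_normal(5)    # variance shift in the second variable
s2 = vmix_statistic(x, y)
print(f"VMIX statistic {s2:.3f}, signal: {s2 > ucl(5)}")
```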
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents, and the success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still "far" from being directly accessible for configuring and offering network services, and they need to be enriched with more "user-oriented" functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and cannot meet future network service requirements, e.g., the coordinated control of resources.

The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer, giving users and applications access to abstracted levels of information regarding the offered advanced network services. This thesis faces the problem of defining such a Service Oriented Architecture and its relevant building blocks, protocols, and languages. In particular, the work focuses on the use of the SIP protocol as an inter-layer signalling protocol that defines the Session Plane, in conjunction with the Network Resource Description language.

On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both promise to provide all-optical burst or packet switching instead of the current circuit switching; however, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This thesis addresses this open issue by proposing a highly efficient implementation of a burst and packet scheduler, sketched below. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
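As an illustration of the claim that scheduling reduces to a min/max computation, here is a minimal sketch of a horizon-based (LAUC-style) wavelength scheduler; the concrete rule and data structures in the thesis may differ. Each output channel keeps only its "horizon" (the time at which it becomes free), and the assignment is a single max over the eligible channels.

```python
def schedule(horizons, arrival, duration):
    """Assign a burst to the latest-available channel that is already
    free when the burst arrives (minimizing the idle gap); return the
    channel index, or None if every channel is still busy."""
    free = [i for i, h in enumerate(horizons) if h <= arrival]
    if not free:
        return None                                  # drop or buffer the burst
    ch = max(free, key=lambda i: horizons[i])        # max over channel horizons
    horizons[ch] = arrival + duration                # update the channel state
    return ch

horizons = [0.0, 3.0, 5.0]                           # per-wavelength free times
print(schedule(horizons, arrival=4.0, duration=2.0)) # -> 1 (idle gap of 1.0)
```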
Abstract:
The production of a spin-polarized beam with a high average current is important both for the operation of existing polarized sources and, to an even greater extent, for planned future projects. The operating time of such sources is limited by the decay of the photocathode's quantum efficiency over time. This decay could be explained by the reaction of the cathode surface with oxygen-containing molecules and by ion bombardment. In the course of this work, mechanisms that contribute to the generation of the chemically active molecules and of the ions, along with further effects that reduce the operating time of polarized sources, were investigated, in part for the first time. The experiments were carried out on an exact copy of the polarized source in operation at MAMI. It was demonstrated that heating of the photocathode, ion trapping, and beam loss due to space-charge forces can limit the cathode lifetime. The first effect is heating of the photocathode: the laser power is converted almost completely into heat, which can reduce the availability of polarized sources regardless of whether photocurrent is being produced or not. The second effect is bombardment by ions generated both in the acceleration gap and in the beamline. It was demonstrated that the ion current generated in the beamline is even larger than that generated in the gun. Under certain conditions, the ions formed can be trapped by the potential of the electron beam and reach the gun, contributing additionally to the destruction of the negative electron affinity. The third effect is beam loss. It was demonstrated that the relative beam losses should be smaller than 1×10⁻⁶ in order to reach a lifetime of more than 1000 hours at a current of 100 µA, which is feasible with the existing apparatus. To generate extremely high currents, the principle of "energy recovery" was employed for the first time in the field of spin-polarized sources. Experiments at an average current of 11.4 mA and a peak current of 57 mA at 1% duty cycle have already been carried out.
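As a quick consistency check on the quoted numbers (not part of the original abstract): at an average current of 100 µA sustained for 1000 hours, the extracted charge is

\[ Q = 100\,\mu\mathrm{A} \times 1000\,\mathrm{h} = 10^{-4}\,\mathrm{A} \times 3.6\times 10^{6}\,\mathrm{s} = 360\,\mathrm{C}, \qquad Q_{\mathrm{lost}} < 10^{-6}\times 360\,\mathrm{C} = 360\,\mu\mathrm{C}, \]

i.e., a relative loss below 1×10⁻⁶ corresponds to less than 360 µC of beam charge lost over the entire lifetime.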
Abstract:
Increasing demand for marketing accountability requires an efficient allocation of marketing expenditures. Managers who know the elasticity of their marketing instruments can allocate their budgets optimally. Meta-analyses offer a basis for deriving benchmark elasticities for advertising. Although they provide a variety of valuable insights, a major shortcoming of prior meta-analyses is that they report only generalized results, as the disaggregated raw data are not made available. This problem is highly relevant because the coding of empirical studies involves, at least to a certain extent, subjective judgment. For this reason, meta-studies would be more valuable if researchers and practitioners had access to disaggregated data allowing them to conduct further analyses of individual, e.g., product-level-specific, interest. We are the first to address this gap by providing (1) an advertising elasticity database (AED) and (2) empirical generalizations about advertising elasticities and their determinants. Our findings indicate that the average current-period advertising elasticity is 0.09, which is substantially smaller than the value of 0.12 recently reported by Sethuraman, Tellis, and Briesch (2011). Furthermore, our meta-analysis reveals a wide range of significant determinants of advertising elasticity. For example, we find that advertising elasticities are higher (i) for hedonic and experience goods than for other goods; (ii) for new than for established goods; (iii) when advertising is measured in gross rating points (GRP) instead of absolute terms; and (iv) when the lagged dependent or lagged advertising variable is omitted.
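To make the 0.09 figure concrete (an illustrative reading, not a calculation from the paper): under a constant-elasticity model, sales scale as \(Q \propto A^{\eta}\), so

\[ \eta = \frac{\partial Q / Q}{\partial A / A} = 0.09 \quad\Rightarrow\quad \frac{Q_{\mathrm{new}}}{Q_{\mathrm{old}}} = 2^{0.09} \approx 1.06, \]

i.e., doubling the advertising budget would lift sales by only about 6%.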
Abstract:
This dataset consists of average water depth, average current velocity and direction and roughness lengths calculated from the spatially-averaged velocity profiles collected with an ADCP along a transect in the Jade Bay in 2008.
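The dataset description does not state the method, but roughness lengths are conventionally obtained by fitting the logarithmic law of the wall to such spatially-averaged velocity profiles:

\[ u(z) = \frac{u_*}{\kappa} \ln\!\frac{z}{z_0}, \qquad \kappa \approx 0.41, \]

where \(u_*\) is the friction velocity, \(z\) the height above the bed, and \(z_0\) the roughness length recovered from the fit.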