927 results for optimal reactive power flow
Abstract:
This paper proposes asymptotically optimal tests for an unstable parameter process under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to likelihood ratio (LR) tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate. This may especially be true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one (often two) doses, instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, this dose can be recommended for further confirmatory testing in a phase III trial under potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses, assuming ordered response rates between the doses. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor. Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to designs for multiple-group trials and drug-combination trials.
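The isotonic step described in the Methods can be illustrated with a minimal sketch: for two doses with an assumed ordering p1 <= p2, the pool-adjacent-violators algorithm (PAVA) replaces order-violating sample proportions with their weighted average. The function name and the two-dose simplification below are illustrative assumptions, not the dissertation's actual implementation.

```python
def isotonic_two_dose(x1, n1, x2, n2):
    """Isotonic (PAVA) estimate of two ordered response rates.

    Assumes the true rates satisfy p1 <= p2 (low dose <= high dose).
    x_i responses out of n_i patients at dose i. If the raw
    proportions violate the ordering, pool them into their weighted
    average, which is the isotonic regression solution for two points.
    """
    p1, p2 = x1 / n1, x2 / n2
    if p1 <= p2:                    # ordering already satisfied
        return p1, p2
    pooled = (x1 + x2) / (n1 + n2)  # weighted average of both doses
    return pooled, pooled

# Example: 8/20 responses at the low dose and 6/20 at the high dose
# violate the ordering, so both estimates become 14/40 = 0.35.
print(isotonic_two_dose(8, 20, 6, 20))
```

Borrowing strength this way reduces the variability of both estimates whenever sampling noise violates the ordering, which is one intuition for why ordered designs can get by with smaller sample sizes.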
Abstract:
The aim of this study is to assess the experience of flow and its relationship with personality traits and age in adolescents. For this purpose, 224 participants of both sexes, aged 12-20 years, were selected and examined with the following tools: the Flow State scale for adolescents (Leibovich de Figueroa & Schmidt, 2013), a 28-item self-report technique that assesses flow state, covering all the aspects theoretically listed as components of the optimal experience of enjoyment; and the self-report Being a Teenager Nowadays, which evaluates 33 pairs of opposite personality characteristics representing the personality domains of the NEO-PI-R (Costa & McCrae, 1992; Costa & McCrae, 2005; Leibovich & Schmidt, 2005). Among the results, it was observed that in adolescents with high scores on the Flow State scale, the main personality trait was extroversion. The influence of age on the optimal flow experience also appears in the chosen activities.
Abstract:
The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
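For context on the frequency-magnitude analysis above, the exponent of a power-law volume distribution is commonly fit by maximum likelihood (a Hill-type estimator); a minimal sketch follows, with a synthetic volume catalog and an assumed lower cutoff standing in for real data:

```python
import numpy as np

def power_law_alpha(volumes, v_min):
    """Maximum-likelihood exponent alpha of a continuous power law
    p(v) ~ v**(-alpha) for v >= v_min (Hill-type estimator)."""
    v = np.asarray(volumes, dtype=float)
    v = v[v >= v_min]  # keep only events above the chosen cutoff
    return 1.0 + len(v) / np.sum(np.log(v / v_min))

# Synthetic catalog of debris flow volumes (m^3) drawn from a power
# law with alpha = 2.4 via inverse-transform sampling; the steeper
# (larger) the fitted alpha, the higher the proportion of small events.
rng = np.random.default_rng(0)
sample = 100.0 * (1.0 - rng.random(988)) ** (-1.0 / 1.4)
print(power_law_alpha(sample, v_min=100.0))  # close to 2.4
```

A steeper fitted exponent for the post-fire catalog is exactly the signature reported above: a distribution shifted toward smaller event volumes.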
Abstract:
The literature on agency problems arising between controlling and minority owners claims that the separation of cash flow and control rights allows controllers to expropriate listed firms, and further that separation emerges when dual-class shares or pyramiding corporate structures exist. Dual-class shares and pyramiding coexisted in listed companies of China until the discriminated share reform was implemented in 2005. This paper presents a model of controllers' expropriation behavior, as well as empirical tests of expropriation via particular accounting items and of pyramiding-generated expropriation. Results show that expropriation is apparent for state-controlled listed companies. While the reforms have weakened the power to expropriate, separation remains and still generates expropriation. The size of expropriation is estimated at 7 to 8 per cent of total assets at the mean. If the "one share, one vote" principle were realized, asset inflation could be reduced by 13 per cent.
Abstract:
An EMI filter for a three-phase buck-type medium-power pulse-width modulation rectifier is designed. The filter addresses differential-mode noise and complies with MIL-STD-461E over the frequency range of 10 kHz to 10 MHz. In industrial applications, the frequency range of the standard starts at 150 kHz, and designers typically choose a switching frequency of 28 kHz because the fifth harmonic falls outside the range. This approach is not valid for aircraft applications. To select the switching frequency for aircraft applications, the power losses in the semiconductors and the weight of the reactive components should be considered. The proposed design is based on a harmonic analysis of the rectifier input current and an analytical study of the input filter. The classical industrial design neglects the inductive effect in the filter design because the grid frequency is 50/60 Hz. In aircraft applications, however, the grid frequency is 400 Hz and the inductance cannot be neglected. The proposed design considers both the inductive and capacitive effects of the filter in order to obtain unity power factor at full power. In the optimization process, several filters are designed for different switching frequencies of the converter. In addition, designs with one to five stages are considered. The power losses of the converter plus the EMI filter are estimated at these switching frequencies. Considering overall losses and minimal filter volume, the optimal switching frequency is selected.
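As a rough illustration of the filter-sizing trade-off described above (a textbook asymptote, not the paper's actual procedure), each ideal LC stage contributes roughly 40 dB/decade of attenuation above its corner frequency, so the required attenuation at the first significant harmonic fixes the corner of an n-stage filter; the noise frequency and attenuation below are assumed for the example:

```python
def lc_corner_for_attenuation(f_noise_hz, att_req_db, n_stages):
    """Corner frequency of an n-stage LC filter giving att_req_db of
    attenuation at f_noise_hz, using the ideal 40 dB/decade-per-stage
    asymptote (parasitics and damping networks are ignored here)."""
    decades_above_corner = att_req_db / (40.0 * n_stages)
    return f_noise_hz / (10.0 ** decades_above_corner)

# Hypothetical case: 80 dB of attenuation needed at a 150 kHz harmonic.
for n in (1, 2, 3):
    fc = lc_corner_for_attenuation(150e3, 80.0, n)
    print(f"{n} stage(s): corner frequency ~ {fc / 1e3:.1f} kHz")
```

More stages allow a higher corner frequency and hence smaller reactive components, at the price of extra parts; weighing this against semiconductor losses at each candidate switching frequency is the kind of optimization the abstract describes.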
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from a lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore well suited to real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a consensus-based distributed version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. Some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world; there are, however, scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One inconvenience of collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we derive distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it can be solved efficiently. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
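The local refinement step mentioned above can be sketched for the standard log-distance RSSI model; the following is a centralized Gauss-Newton illustration under assumed model parameters (p0, gamma), not the consensus-based distributed variant developed in the thesis:

```python
import numpy as np

def gauss_newton_rssi(anchors, rssi, x0, p0=-40.0, gamma=3.0, iters=10):
    """Gauss-Newton refinement of a target position from RSSI readings.

    Assumed log-distance model: rssi_i = p0 - 10*gamma*log10(||x - a_i||).
    anchors: (N, 2) node positions, rssi: (N,) measurements, x0: initial guess.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                        # (N, 2) offsets to nodes
        d2 = np.sum(diff**2, axis=1)              # squared distances
        h = p0 - 5.0 * gamma * np.log10(d2)       # predicted RSSI
        r = rssi - h                              # residuals
        J = -(10.0 * gamma / np.log(10.0)) * diff / d2[:, None]  # Jacobian of h
        x = x + np.linalg.lstsq(J, r, rcond=None)[0]             # GN update
    return x

# Hypothetical demo: 8 nodes in a 100 m square, target at (37, 62).
rng = np.random.default_rng(1)
anchors = rng.uniform(0.0, 100.0, size=(8, 2))
d = np.linalg.norm(anchors - np.array([37.0, 62.0]), axis=1)
rssi = -40.0 - 30.0 * np.log10(d) + rng.normal(0.0, 1.0, 8)  # noisy readings
print(gauss_newton_rssi(anchors, rssi, x0=np.array([50.0, 50.0])))
```

Under independent measurement noise, the J^T J and J^T r terms of each update decompose into per-node sums, which is what makes the consensus-based distributed version described above possible.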
Abstract:
Abstract interpretation-based data-flow analysis of logic programs is at this point relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when the analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Such problems relate to dealing correctly with all builtins, including meta-logical and extra-logical predicates; with dynamic predicates (where the program is modified during execution); and with the absence of certain program text during compilation. Existing proposals for dealing with such issues generally restrict, in one way or another, the classes of programs which can be analyzed if the information from the analysis is to be used for program optimization. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially following the recently proposed ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs using the full power of the language.
Abstract:
The massive integration of renewable energy sources into the electrical power systems of remote islands is a subject of current interest. The increasing cost of fossil fuels, transport costs to isolated sites, and environmental concerns constitute a serious drawback to the use of conventional fossil fuel plants. In a weak electrical grid, as is typical on an island, if a large amount of conventional generation is substituted by renewable energy sources, power system safety and stability can be compromised in the case of large grid disturbances. In this work, a model for transient stability analysis of an isolated electrical grid exclusively fed from a combination of renewable energy sources has been studied. This new generation model will be installed on El Hierro Island, in Spain. Additionally, an operation strategy to coordinate the generation units (wind, hydro) is established. Attention is given to the assessment of inertial energy and reactive current to guarantee power system stability against large disturbances. The effectiveness of the proposed strategy is shown by means of simulation results.
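For intuition about the inertial-energy assessment mentioned above, a generic textbook relation (not the paper's model) links the size of a generation-load imbalance to the initial rate of change of frequency through the aggregated swing equation; the island-scale numbers below are invented for illustration:

```python
def initial_rocof(delta_p_mw, s_base_mva, h_sec, f0_hz=50.0):
    """Initial rate of change of frequency (Hz/s) after a power
    imbalance, from the aggregated swing equation:
        df/dt = -(delta_p / s_base) * f0 / (2 * h)
    delta_p_mw: lost generation (positive = deficit),
    s_base_mva: system rating, h_sec: aggregate inertia constant."""
    return -(delta_p_mw / s_base_mva) * f0_hz / (2.0 * h_sec)

# Hypothetical small island system: 10 MW trip on a 30 MVA base, H = 3 s.
print(initial_rocof(10.0, 30.0, 3.0))  # about -2.8 Hz/s
```

The low aggregate inertia of such a system, and hence the fast frequency decay after a disturbance, is precisely why coordinating the wind and hydro units and assessing inertial energy and reactive current matter for stability.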
Abstract:
This paper is concerned with the low-dimensional structure of optimal streaks in a wedge flow boundary layer, which have recently been shown to consist of a unique (up to a constant factor) three-dimensional streamwise-evolving mode, known as the most unstable streaky mode. Optimal streaks exhibit a still unexplored/unexploited approximate self-similarity (not associated with the boundary layer self-similarity): the streamwise velocity, rescaled with its maximum, remains almost independent of both the spanwise wavenumber and the streamwise coordinate; the remaining two velocity components, instead, do not satisfy this property. The approximately self-similar behavior is analyzed here and exploited to further simplify the description of optimal streaks. In particular, it is shown that streaks can be approximately described in terms of the streamwise evolution of the scalar amplitudes of just three one-dimensional modes, providing the wall-normal profiles of the streamwise velocity and two combinations of the cross-flow velocity components; the scalar amplitudes obey a singular system of three ordinary differential equations (involving only two degrees of freedom), which approximates well the streamwise evolution of the general streaks.
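In symbols, the approximate self-similarity described above can be stated schematically (the notation here, with streamwise velocity u, wall-normal coordinate y, spanwise wavenumber beta, and a single wall-normal profile f, is ours, not the paper's):

```latex
\frac{u(x, y; \beta)}{\max_{y} u(x, y; \beta)} \;\approx\; f(y),
\qquad \text{approximately independent of } x \text{ and } \beta ,
```

while the cross-flow velocity components admit no such collapse, which is why the reduced description retains three one-dimensional modes rather than a single one.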
Abstract:
Envelope Tracking (ET) and Envelope Elimination and Restoration (EER) are two techniques that have been used as a solution for highly efficient linear RF power amplifiers (PAs). In both techniques the most important part is a dc-dc converter, called the envelope amplifier, that has to supply the RF PA with a variable voltage. Besides high efficiency, wide bandwidth is very important as well. An envelope amplifier based on the parallel combination of a switching dc-dc converter and a linear regulator is an architecture that is widely used due to its simplicity. In this paper we discuss the theoretical limitations of this architecture regarding its efficiency and demonstrate two possible ways of implementing it. In order to derive the presented conclusions, a theoretical model of the envelope amplifier's efficiency is presented. Additionally, the benefits of the emerging GaN technology for this application are shown as well.
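As a back-of-the-envelope illustration of the efficiency question discussed above (a generic simplified model with invented numbers, not the paper's analysis), an ideal linear stage supplying a varying envelope from a fixed rail wastes the rail-to-output voltage drop; this is the baseline that the parallel switcher-plus-linear architecture improves on:

```python
import numpy as np

def linear_stage_efficiency(envelope_v, v_rail):
    """Efficiency of an ideal series linear stage driving a resistive
    load with the given voltage envelope from a fixed rail v_rail:
    eta = E[v^2] / (v_rail * E[v]); quiescent losses are ignored."""
    v = np.asarray(envelope_v, dtype=float)
    return np.mean(v**2) / (v_rail * np.mean(v))

# Hypothetical full-swing sinusoidal envelope between 0 V and 24 V:
t = np.linspace(0.0, 1.0, 10_000)
env = 12.0 + 12.0 * np.sin(2.0 * np.pi * t)
print(linear_stage_efficiency(env, v_rail=24.0))  # about 0.75
# If a switching converter instead supplies most of the power and the
# linear branch handles only the residual, the drop across the linear
# device shrinks and overall efficiency rises, at the cost of the
# converter's own losses and its limited bandwidth.
```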