994 results for Detailed models
Abstract:
The power system stability analysis is approached taking into explicit account the dynamic performance of the generators' internal voltages and control devices. The proposed method is not a direct method in the usual sense, since the conclusion about stability or instability is not based exclusively on energy function considerations, but it is automatic, since the conclusion is reached without analyst intervention. In contrast with the well-known direct methods, the stability test accounts for the non-conservative nature of a system with control devices such as the automatic voltage regulator (AVR) and automatic generation control (AGC). An energy function is derived for the system with fourth-order machine models, AVR and AGC, and it is used to start the analysis procedure and to point out criticalities. The conclusive analysis itself is carried out by means of a method based on the definition of a region surrounding the equilibrium point where the system net torque is equilibrium-restorative. This region is named the positive synchronization region (PSR). Since the definition of the PSR boundaries has no dependence on modelling approximations, the PSR test leads to reliable results. (C) 2008 Elsevier Ltd. All rights reserved.
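For orientation, a minimal sketch of the classical (lossless, classical-model) multimachine transient energy function is given below; the energy function derived in the paper additionally covers fourth-order machine models, AVR and AGC and is not reproduced here.

```latex
% Classical lossless transient energy function, speeds relative to the centre of inertia.
\begin{align}
V(\delta,\omega) = \tfrac{1}{2}\sum_{i=1}^{n} M_i\,\omega_i^{2}
  - \sum_{i=1}^{n} P_{mi}\left(\delta_i-\delta_i^{s}\right)
  - \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} E_i E_j B_{ij}
    \left(\cos\delta_{ij}-\cos\delta_{ij}^{s}\right)
\end{align}
```

Here $\delta^{s}$ denotes the post-fault stable equilibrium and $B_{ij}$ the reduced-network transfer susceptances; the first term is the kinetic energy and the remaining terms the potential energy.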
Abstract:
This paper analyses the limits of the simplified wind turbine models used to represent the behavior of wind turbines in power system stability studies. Based on experimental measurements, the response of the recent simplified (also known as generic) wind turbine models currently being developed within the International Standard IEC 61400-27 is compared with the complex detailed models developed by wind turbine manufacturers. This International Standard, whose Technical Committee was convened in October 2009, is focused on defining generic simulation models for both wind turbines (Part 1) and wind farms (Part 2). The results of this work provide an improved understanding of the usability of generic models for conducting power system simulations.
Abstract:
Increased rotational speed brings many advantages to an electric motor. One of the benefits is that when the desired power is generated at increased rotational speed, the torque demanded from the rotor decreases linearly and, as a consequence, a motor of smaller size can be used. Using a rotor with high rotational speed in a system with mechanical bearings can, however, create undesirable vibrations, and therefore active magnetic bearings (AMBs) are often considered a good option for the main bearings, as the rotor then has no mechanical contact with other parts of the system but levitates on magnetic forces. On the other hand, such systems can experience overloading or a sudden shutdown of the electrical system, whereupon the magnetic field collapses and, as a result of rotor delevitation, mechanical contact occurs. To manage such non-standard operations, AMB systems require mechanical touchdown bearings with an oversized bore diameter. The need for touchdown bearings seems to be one of the barriers preventing greater adoption of AMB technology, because in the event of an uncontrolled touchdown, failure may occur, for example, in the bearing's cage or balls, or in the rotor.
This dissertation consists of two parts. First, touchdown bearing misalignment in the contact event is studied. It is found that misalignment increases the likelihood of a potentially damaging whirling motion of the rotor, and a model for analysis of the stresses occurring in the rotor is proposed. In the studies of misalignment and stresses, a flexible rotor model based on a finite element approach is applied, and simplified models of cageless and caged bearings are used to describe the touchdown bearings. The results indicate that an increase in misalignment can have a direct influence on the bending and shear stresses occurring in the rotor during the contact event. It was therefore concluded that analysis of the stresses arising in the contact event is essential to guarantee appropriate system dimensioning for possible contact events with misaligned touchdown bearings.
One of the conclusions drawn from the first part of the study is that knowledge of the forces affecting the balls and cage of the touchdown bearings can enable a more reliable estimation of the bearing's service life. The second part of the dissertation therefore investigates the forces occurring in the cage and balls of touchdown bearings and introduces two detailed models of touchdown bearings in which all bearing parts are modelled as independent bodies. Two multibody-based two-dimensional models of touchdown bearings are introduced for dynamic analysis of the contact event. All parts of the bearings are modelled with geometrical surfaces, and the bodies interact with each other through elastic contact forces. To assist in identifying the forces affecting the balls and cage in the contact event, the first model describes a touchdown bearing without a cage and the second a touchdown bearing with a cage. The introduced models are compared with the simplified models used in the first part of the dissertation through a parametric study.
Damage to the rotor, cage and balls is among the main reasons for failures of AMB systems. The stresses in the rotor during the contact event are defined in this work, and the forces affecting the key bodies of the bearings, the cage and the balls, can be studied using the models of touchdown bearings introduced in this dissertation. The knowledge obtained from the introduced models is valuable, since it can enable an optimal structure for the rotor and touchdown bearings to be designed.
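As an illustration of the kind of elastic contact force such multibody bearing models rely on, the sketch below evaluates a Hertzian point-contact force with a simple damping term between a ball and a race; the stiffness, damping coefficient and penetration values are hypothetical placeholders, not parameters from the dissertation.

```python
def hertzian_contact_force(penetration, penetration_rate,
                           k_hertz=9.0e9, damping=5.0e5):
    """Normal force for a ball-race point contact.

    Hertzian theory gives F = k * delta**1.5 for a point contact; a
    penetration-proportional damping term is added so that energy is
    dissipated during the impact. All values are illustrative only.
    """
    if penetration <= 0.0:                       # bodies separated: no force
        return 0.0
    elastic = k_hertz * penetration ** 1.5
    dissipative = damping * penetration * penetration_rate
    return max(elastic + dissipative, 0.0)       # contact cannot pull

# Example: 5 micrometres of penetration, surfaces approaching at 0.1 m/s.
print(f"{hertzian_contact_force(5e-6, 0.1):.1f} N")
```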
Abstract:
The research aimed to establish tyre-road noise models by using a Data Mining approach that made it possible to build a predictive model and to assess the importance of the tested input variables. The data modelling took into account three learning algorithms and three metrics to define the best predictive model. The variables tested included basic properties of pavement surfaces, macrotexture, megatexture and unevenness and, for the first time, damping. The importance of those variables was also measured by using a sensitivity analysis procedure. Two types of models were set up: one with basic variables and another with complex variables, such as megatexture and damping, all as a function of vehicle speed. More detailed models were additionally set up for each speed level. As a result, several models with very good tyre-road noise predictive capacity were achieved. The most relevant variables were Speed, Temperature, Aggregate size, Mean Profile Depth and Damping, which had the highest importance, even though it was influenced by speed. Megatexture and IRI had the lowest importance. The models developed in this work are applicable to truck tyre-noise prediction, represented by the AVON V4 test tyre, at the early stage of road pavement use. Therefore, the obtained models are highly useful for the design of pavements and for noise prediction by road authorities and contractors.
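As a rough sketch of this kind of workflow (not the paper's actual algorithms or data), the example below trains a regression model on synthetic noise measurements and ranks input-variable importance with a permutation-based sensitivity analysis; all variable names, value ranges and the generated data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors, loosely mirroring the variables named in the abstract.
X = np.column_stack([
    rng.uniform(40, 90, n),      # speed [km/h]
    rng.uniform(5, 35, n),       # temperature [deg C]
    rng.uniform(6, 14, n),       # aggregate size [mm]
    rng.uniform(0.4, 1.6, n),    # mean profile depth [mm]
    rng.uniform(0.01, 0.10, n),  # damping [-]
])
feature_names = ["speed", "temperature", "aggregate_size", "mpd", "damping"]
# Synthetic noise level [dB(A)]: dominated by speed, perturbed by the other inputs.
y = 30 * np.log10(X[:, 0]) + 0.1 * X[:, 1] - 20 * X[:, 4] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation-based sensitivity: how much does shuffling each input hurt the fit?
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:15s} {score:.3f}")
```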
Abstract:
The objective of this project was the development of biologically inspired algorithms for artificial olfaction. To achieve this, we built on the support vector machine paradigm. We constructed algorithms that mimic the computational processes of the different systems forming the olfactory system of insects, in particular of the locust Schistocerca gregaria. We focused on the antennal lobe and on the mushroom body. The former is considered an odour-coding device which, from the temporal response of the olfactory receptors on the antennae, generates a spatial and temporal activation pattern. The mushroom body, in turn, is considered to function as a memory for odours, as well as a centre for multi-sensory integration. The first step was the construction of detailed models of the two systems. We then used these models to process different types of signals with the aim of abstracting the underlying computational principles. Finally, we evaluated the capabilities of these abstract models and used them to process data from gas sensors. The results show that the abstract models have better noise tolerance and a larger memory capacity than more classical models, such as Hopfield associative memories, and even, under certain circumstances, than Support Vector Machines themselves.
Abstract:
Combining geological knowledge with proved plus probable ('2P') oil discovery data indicates that over 60 countries are now past their resource-limited peak of conventional oil production. The data show that the global peak of conventional oil production is close. Many analysts who rely only on proved ('1P') oil reserves data draw a very different conclusion, but proved oil reserves contain no information about the true size of discoveries, being variously under-reported, over-reported and not reported. Reliance on 1P data has led to a number of misconceptions, including the notions that past oil forecasts were incorrect, that oil reserves grow very significantly due to technology gain, and that the global supply of oil is ensured provided sufficient investment is forthcoming to 'turn resources into reserves'. These misconceptions have been widely held, including within academia, governments, some oil companies, and organisations such as the IEA. In addition to conventional oil, the world contains large quantities of non-conventional oil. Most current detailed models show that, past the conventional oil peak, the non-conventional oils are unlikely to come on-stream fast enough to offset conventional oil's decline. To determine the extent of future oil supply constraints, calculations are required to determine fundamental rate limits for the production of non-conventional oils, as well as oil from gas, coal and biomass, and of oil substitution. Such assessments will need to examine technological readiness and lead-times, as well as rate constraints on investment, pollution, and net-energy return. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Using high-time-resolution (72 ms) spectroscopy of AE Aqr obtained with LRIS on Keck II we have determined the spectrum and spectral evolution of a small flare. Continuum and integrated line fluxes in the flare spectrum are measured, and the evolution of the flare is parametrized for future comparison with detailed models of the flares. We find that the velocities of the flaring components are consistent with those previously reported for AE Aqr by Welsh, Horne & Gomer and Horne. The characteristics of the 33-s oscillations are investigated: we derive the oscillation amplitude spectrum, and from that determine the spectrum of the heated regions on the rotating white dwarf. Blackbody fits to the major and minor pulse spectra and an analysis of the emission-line oscillation properties highlight the shortfalls in the simple hotspot model for the oscillations.
Abstract:
The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and the integration of external (top, surface, lateral) forcings that are crucial at climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecasting and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes the societal service capability to provide impact-relevant information by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and a basic skill evaluation based on a continuous integration for the period 1979–2009 as compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to advantages of greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.
Abstract:
The present study provides a methodology that gives a predictive character to computer simulations based on detailed models of the geometry of a porous medium. We use the software FLUENT to investigate the flow of a viscous Newtonian fluid through a random fractal medium which idealizes a two-dimensional disordered porous medium representing a petroleum reservoir. This fractal model is formed by obstacles of various sizes, whose size distribution function follows a power law whose exponent is defined as the fractal dimension of fractionation, Dff, of the model, characterizing the process of fragmentation of these obstacles. The obstacles are randomly placed in a rectangular channel. The modelling process incorporates modern concepts, namely scaling laws, to analyse the influence of the heterogeneity found in the porosity and permeability fields, so as to characterize the medium in terms of its fractal properties. This procedure allows us to analyse numerically the measured permeability k and drag coefficient Cd and to propose relationships, such as power laws, for these properties under various modelling schemes. The purpose of this research is to study the variability introduced by these heterogeneities; the velocity field and other details of the viscous fluid dynamics are obtained by numerically solving the continuity and Navier-Stokes equations at the pore level, and we observe how the fractal dimension of fractionation of the model affects its hydrodynamic properties. Two classes of models were considered in this study: models with constant porosity, MPC, and models with varying porosity, MPV. The results have allowed us to find numerical relationships between the permeability, the drag coefficient and the fractal dimension of fractionation of the medium. Based on these numerical results, we have proposed scaling relations and algebraic expressions involving the relevant parameters of the phenomenon. Analytical equations were also determined for Dff as a function of the geometrical parameters of the models. We further found that the permeability and the drag coefficient are inversely proportional to one another. The difference in behaviour is most striking in the MPV class of models; that is, the fact that the porosity varies in these models is an additional factor that plays a significant role in the flow analysis. Finally, the results proved satisfactory and consistent, which demonstrates the effectiveness of the methodology for all the applications analysed in this study.
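As an illustration of how a power-law (fractal) size distribution of obstacles could be sampled and placed in a rectangular channel, the sketch below uses inverse-transform sampling; it is a generic construction, not the authors' model-generation procedure, and the exponent, size limits and channel dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def powerlaw_radii(n, d_ff, r_min, r_max):
    """Sample n obstacle radii whose cumulative count obeys N(>r) ~ r**(-d_ff),
    i.e. a probability density p(r) ~ r**-(d_ff + 1), truncated to [r_min, r_max]."""
    u = rng.random(n)
    a, b = r_min ** (-d_ff), r_max ** (-d_ff)
    return (a - u * (a - b)) ** (-1.0 / d_ff)

# Hypothetical channel (1.0 x 0.2) and distribution parameters, for illustration only.
length, height = 1.0, 0.2
radii = powerlaw_radii(n=200, d_ff=1.6, r_min=0.002, r_max=0.02)
centres = rng.random((radii.size, 2)) * np.array([length, height])

# Nominal porosity of the realization (overlaps between obstacles are ignored here).
porosity = 1.0 - np.pi * np.sum(radii ** 2) / (length * height)
print(f"approximate porosity: {porosity:.3f}")
```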
Abstract:
This paper deals with a hybrid method for transient stability analysis combining time domain simulation and a direct method. Nowadays, step-by-step simulation is the best available tool for allowing the use of detailed models and for providing reliable results. The main limitations of this approach are the long computational simulation times and the absence of a stability margin. On the other hand, direct methods, which demand less CPU time, have not yet shown ample reliability and applicability. The best way forward seems to be the use of hybrid solutions, in which a direct method is incorporated into a time domain simulation tool. This work has studied a direct method using the transient potential and kinetic energy of the critical machine only. In this paper the critical machine is identified by a fast and efficient method, and the proposal is new in the way stability margins are obtained from the hybrid approach. Results from test systems, such as a 16-machine system, show stability indices suitable for dynamic security assessment. © 2001 IEEE.
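For context, a hedged sketch of the critical-machine energy terms that hybrid methods of this kind typically track (a generic form, not necessarily the exact formulation of this paper), with rotor angle and speed referred to the centre of inertia:

```latex
% Kinetic and potential transient energy attributed to the critical machine,
% with rotor angle and speed taken relative to the centre of inertia (COI).
\begin{align}
V_{KE,\,cr} &= \tfrac{1}{2}\, M_{cr}\, \tilde{\omega}_{cr}^{\,2}, &
V_{PE,\,cr} &= -\int_{\tilde{\delta}_{cr}^{\,s}}^{\tilde{\delta}_{cr}}
      \bigl(P_{m,cr} - P_{e,cr}\bigr)\, \mathrm{d}\tilde{\delta}_{cr}
\end{align}
```

In direct-method formulations of this type, the stability margin is commonly taken as the difference between a critical energy value and the transient energy $V_{KE,cr} + V_{PE,cr}$ accumulated along the simulated trajectory.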
Abstract:
This paper explains why the reliability assessment of energy-limited systems requires more detailed models of primary generating resource availability, internal and external generation dispatch, and customer demand than the ones commonly used for large power systems, and it presents a methodology for their long-term reliability assessment, based on the full sequential Monte Carlo simulation technique with AC power flow, which can properly include these detailed models. By means of a real example, it is shown how the simplified modelling traditionally used for large power systems leads to pessimistic predictions if it is applied to an energy-limited system and also that it cannot predict all the load-point adequacy problems. © 2006 IEEE.
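As a rough illustration of sequential Monte Carlo reliability simulation in general, the sketch below draws chronological two-state (up/down) histories for a few hypothetical units and estimates a loss-of-load expectation against a constant load; it deliberately omits the AC power flow, dispatch and energy constraints that the paper's methodology includes, and every parameter value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical units: (capacity [MW], failure rate [1/h], repair rate [1/h]).
units = [(50.0, 1 / 2000.0, 1 / 50.0),
         (50.0, 1 / 2000.0, 1 / 50.0),
         (30.0, 1 / 1500.0, 1 / 40.0)]
load = 90.0        # constant hourly load [MW] -- a deliberate simplification
hours = 8760       # one simulated year
years = 1000       # number of Monte Carlo year samples

def unit_timeline(cap, lam, mu):
    """Chronological hourly available capacity of one two-state unit."""
    timeline = np.empty(hours)
    t, up = 0, True
    while t < hours:
        mean = 1 / lam if up else 1 / mu             # MTTF or MTTR [h]
        dur = max(1, int(round(rng.exponential(mean))))
        timeline[t:t + dur] = cap if up else 0.0
        t += dur
        up = not up
    return timeline

lole = 0.0
for _ in range(years):
    available = sum(unit_timeline(*u) for u in units)
    lole += np.count_nonzero(available < load)       # hours of capacity deficit
print(f"estimated LOLE: {lole / years:.1f} hours/year")
```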
Abstract:
The representation of real objects in virtual environments has applications in many areas, such as cartography, mixed reality and reverse engineering. These objects can be generated in two ways: manually, with CAD (Computer Aided Design) tools, or automatically, by means of surface reconstruction techniques. The simpler the 3D model, the easier it is to process and store. Multiresolution reconstruction methods can generate polygonal meshes at different levels of detail and, to improve the response time of a computer program, distant objects can be represented with few details, while more detailed models are used for closer objects. This work presents a new approach to multiresolution surface reconstruction, particularly interesting for noisy and low-definition data, for example, point clouds captured with the Kinect sensor.
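As a toy illustration of the level-of-detail idea mentioned above (choosing a coarser or finer mesh by viewer distance), the sketch below switches between mesh resolutions at fixed distance thresholds; the mesh names, triangle counts and thresholds are hypothetical, and this is not the reconstruction method proposed in the work.

```python
from dataclasses import dataclass

@dataclass
class MeshLOD:
    name: str
    triangle_count: int

# Hypothetical multiresolution versions of one reconstructed object, coarsest first.
lods = [MeshLOD("coarse", 500), MeshLOD("medium", 5_000), MeshLOD("fine", 50_000)]
thresholds = [50.0, 15.0]   # switch distances in scene units, illustrative only

def select_lod(distance_to_camera: float) -> MeshLOD:
    """Pick a coarse mesh for far objects and a detailed one for near objects."""
    if distance_to_camera > thresholds[0]:
        return lods[0]
    if distance_to_camera > thresholds[1]:
        return lods[1]
    return lods[2]

for d in (80.0, 20.0, 5.0):
    print(f"{d:5.1f} units away -> {select_lod(d).name} mesh")
```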
Abstract:
Ventricular cells are immersed in a bath of electrolytes, and these ions are essential for a healthy heart and a regular rhythm. Maintaining their physiological concentrations is fundamental for reducing arrhythmias and the risk of sudden cardiac death, especially in haemodialysis patients and in the treatment of heart disease. Models of the electrical activity of the heart based on mathematical formulations are part of the efforts to improve the understanding and prediction of heart behaviour. Modern models incorporate the extensive and ever-increasing amounts of experimental data and biophysically detailed mechanisms, allowing the detailed study of molecular and subcellular mechanisms of heart disease. The goal of this project was to simulate the effects of changes in potassium and calcium concentrations in the extracellular space, comparing experimental data with the descriptions incorporated into two modern biophysically detailed models (the Grandi et al. model and the O'Hara-Rudy model), and to analyse the resulting changes in ventricular electrical activity, in particular by studying the modifications of the simulated electrocardiographic signal. We used the cellular information provided by the heart models to build a 1D tissue description. The fibre is composed of 165 cells and is divided into four groups to differentiate the cell types that compose human ventricular tissue.
The main results are the following. The Grandi et al. (GBP) model is not able to reproduce the correct action potential profile in hyperkalaemia: data from hospitalized patients indicate that the action potential duration (APD) should be shorter than in the physiological state, but in this model we obtain the opposite. For potassium, the results obtained with the O'Hara-Rudy (ORD) model are in agreement with experimental data for the single-cell action potential in hypokalaemia and hyperkalaemia, and most of the currents follow the data from the literature. In the 1D simulations we were able to reproduce the ECG signal for most of the potassium concentrations selected for this study, and we collected data that can help physicians understand what happens in ventricular cells during electrolyte disorders. However, the model fails to conduct the stimulus under hyperkalaemic conditions, and it emphasizes the ECG modifications when the K+ concentration is only slightly above its physiological value. In the calcium setting, using the ORD model we found APD shortening in hypocalcaemia and APD lengthening in hypercalcaemia, i.e. the opposite of the experimental observations. This incorrect behaviour carries over to the one-dimensional simulations, giving a longer QT interval in the ECG under higher [Ca2+]o conditions and vice versa.
In conclusion, the study highlights that the current ventricular models in the literature, although useful in their original form, need improved sensitivity to these two important electrolytes. We suggest using the GBP model with the modifications introduced by Carro et al., who found that the failure of this model is related to the Shannon et al. model (a rabbit model) from which the GBP model was built. The ORD model should be modified in the Ca2+-dependent ICaL and in the influence of IKs on the action potential so that it produces a correct action potential under different calcium concentrations. In the 1D tissue, a heterogeneous setting of the intra- and extracellular conductances for the different cell types may improve the reproduction of the ECG signal.
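For orientation, a minimal sketch of the kind of 1D fibre (cable) propagation such simulations rest on is given below, with simple FitzHugh-Nagumo kinetics standing in for the detailed GBP/ORD ionic models; the number of cells matches the fibre described above, but every other parameter is an arbitrary illustrative value.

```python
import numpy as np

# Toy monodomain cable, explicit Euler:
#   dV/dt = g * (V_{i-1} - 2*V_i + V_{i+1}) + f(V, w) + I_stim
# FitzHugh-Nagumo kinetics stand in for a detailed ventricular ionic model.
n_cells = 165
g, dt, n_steps = 1.0, 0.05, 8000          # coupling, time step [ms], 400 ms in total
a, b, eps = 0.7, 0.8, 0.08                # FitzHugh-Nagumo constants

V = np.full(n_cells, -1.2)                # membrane variable, near resting state
w = np.full(n_cells, -0.62)               # recovery variable
far_end_peak = -np.inf

for step in range(n_steps):
    Vpad = np.pad(V, 1, mode="edge")      # sealed (no-flux) fibre ends
    lap = Vpad[:-2] - 2 * V + Vpad[2:]
    stim = np.zeros(n_cells)
    if step * dt < 2.0:                   # 2 ms stimulus at one end of the fibre
        stim[:5] = 2.0
    dV = g * lap + (V - V**3 / 3 - w) + stim
    dw = eps * (V + a - b * w)
    V += dt * dV
    w += dt * dw
    far_end_peak = max(far_end_peak, V[-1])

# A peak well above rest indicates the excitation propagated along the fibre.
print(f"peak membrane variable at the far end: {far_end_peak:.2f}")
```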
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients, and component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss. Steady-state excitation, de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more correct than those of the BCTRAN-based model. Simulation accuracy is dependent on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and transformer parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage and high-power three-phase transformers would be helpful.
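As a generic illustration of the single-valued λ-i saturation characteristic that each core segment of a duality-based model needs, the sketch below uses a simple two-slope (piecewise-linear) curve; the knee and inductance values are hypothetical placeholders, not parameters estimated in this work.

```python
import numpy as np

LAMBDA_KNEE = 1.1   # flux linkage at the knee [Wb-turns], illustrative
L_UNSAT = 50.0      # unsaturated inductance [H], illustrative
L_SAT = 0.5         # saturated (air-core) incremental inductance [H], illustrative

def magnetizing_current(flux_linkage: float) -> float:
    """Single-valued, two-slope lambda-i curve: current as a function of flux linkage."""
    lam = abs(flux_linkage)
    if lam <= LAMBDA_KNEE:
        i = lam / L_UNSAT
    else:
        i = LAMBDA_KNEE / L_UNSAT + (lam - LAMBDA_KNEE) / L_SAT
    return float(np.sign(flux_linkage)) * i

for lam in (0.5, 1.0, 1.2, 1.5):
    print(f"lambda = {lam:.1f} Wb-t  ->  i = {magnetizing_current(lam):.3f} A")
```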
Abstract:
The biological function of neurons can often be understood only in the context of large, highly interconnected networks. These networks typically form two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations of these areas have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate simulation tools. This paper introduces the freely available Topographica map-level simulator, originally developed at the University of Texas at Austin and now maintained at the University of Edinburgh, UK. Topographica is designed to make large-scale, detailed models practical. The goal is to allow neuroscientists and computational scientists to work together to understand how topographic maps and their connections organize and operate. This understanding will be crucial for integrating experimental observations into a comprehensive theory of brain function.