939 results for Operational analysis
Abstract:
Variable Speed Limits (VSL) is a control tool of Intelligent Transportation Systems (ITS) that can enhance traffic safety and has the potential to contribute to traffic efficiency. This study presents the results of a calibration and operational analysis of a candidate VSL algorithm for high-flow conditions on an urban motorway in Queensland, Australia. The analysis was carried out using a framework consisting of a microscopic simulation model combined with a runtime API and a proposed efficiency index. The operational analysis covers impacts on the speed-flow curve, travel time, speed deviation, fuel consumption and emissions.
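The abstract does not specify how the proposed efficiency index is computed; a minimal sketch of one plausible form is a weighted ratio of controlled to baseline performance measures (the weights and the choice of measures are assumptions, not the paper's formulation):

```python
def efficiency_index(travel_time, base_travel_time, speed_dev, base_speed_dev,
                     w_time=0.5, w_dev=0.5):
    """Hypothetical composite index: values below 1 indicate the VSL
    scheme improved on the no-control baseline (weights are assumed)."""
    return (w_time * travel_time / base_travel_time
            + w_dev * speed_dev / base_speed_dev)
```

For example, halving the speed deviation while shaving 17% off travel time relative to the baseline yields an index well below 1.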
Abstract:
This paper presents mathematical models for BRT station operation, calibrated using microscopic simulation modelling. Models are presented for station capacity and bus queue length. No reliable model presently exists to estimate bus queue length. The proposed bus queue model is analogous to an unsignalized intersection queuing model.
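An unsignalized-intersection analogy of the kind described is often built on steady-state queueing results; the sketch below uses the classical M/M/1 mean-queue formula as a stand-in, not the paper's calibrated model:

```python
def avg_bus_queue(arrival_rate, service_rate):
    """Mean number of buses waiting (Lq) in an M/M/1-style model, a
    stand-in for the unsignalized-intersection analogy; the paper's
    calibrated model is not reproduced here. Rates share one time unit."""
    rho = arrival_rate / service_rate   # utilization of the loading area
    if rho >= 1:
        raise ValueError("demand exceeds station capacity")
    return rho * rho / (1 - rho)        # Lq = rho^2 / (1 - rho)
```

At 50% utilization (e.g. 30 arrivals/h against a service rate of 60 buses/h) this predicts half a bus waiting on average; queues grow sharply as utilization approaches 1.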
Abstract:
Polar lows are maritime mesocyclones associated with intense surface wind speeds and oceanic heat fluxes at high latitudes. The ability of the ERA-Interim (ERAI) reanalysis to represent polar lows in the North Atlantic is assessed by comparing ERAI with the ECMWF operational analysis for the period 2008-2011. First, the representation of a set of satellite-observed polar lows over the Norwegian and Barents Seas in the operational analysis and ERAI is analysed. Then, the possibility of directly identifying and tracking the polar lows in the operational analysis and ERAI is explored using a tracking algorithm based on 850 hPa vorticity, with objective identification criteria on cyclone dynamical intensity and atmospheric static stability. All but one of the satellite-observed polar lows with a lifetime of at least 6 hours have an 850 hPa vorticity signature of a co-located mesocyclone in both the operational analysis and ERAI for most of their life cycles. However, the operational analysis has vorticity structures that better resemble the observed cloud patterns, and stronger surface wind speed intensities, than ERAI. By applying the objective identification criteria, about 55% of the satellite-observed polar lows are identified and tracked in ERAI, while this fraction increases to about 70% in the operational analysis. Particularly in ERAI, the remaining observed polar lows are mostly not identified because their wind speed and vorticity intensities are too weak to satisfy the tested criteria. The implications of ERAI's tendency to underestimate polar low dynamical intensity for future studies of polar lows are discussed.
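The objective identification step described above amounts to threshold tests on dynamical intensity and static stability; the sketch below illustrates the structure only, and its threshold values are placeholders, not the criteria calibrated in the study:

```python
def is_polar_low(vorticity_850, sst_minus_t500,
                 vort_thresh=6e-5, stability_thresh=43.0):
    """Illustrative identification test for a tracked mesocyclone:
    850 hPa relative vorticity (s^-1) must exceed an intensity
    threshold and SST minus T500 (K) must indicate low static
    stability. Threshold values here are placeholders."""
    return vorticity_850 >= vort_thresh and sst_minus_t500 >= stability_thresh
```

Weakening either field below its threshold rejects the candidate, which mirrors why weaker ERAI vorticity intensities lead to fewer identified cases.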
Abstract:
Wind energy is one of the most promising and fastest-growing sectors of energy production. Wind is an ecologically friendly and relatively cheap energy resource, available for development in practically all corners of the world (wherever the wind blows). Today wind power has gained broad development in the Scandinavian countries. Three important challenges concerning sustainable development, i.e. energy security, climate change and energy access, make a compelling case for large-scale utilization of wind energy. In Finland, according to the climate and energy strategy adopted in 2008, electricity generated by wind farms should reach 6 - 7% of total consumption in the country by 2020 [1]. The main challenges associated with wind energy production are the harsh operational conditions that often accompany turbine operation in northern climates, and poor accessibility for maintenance and service. One of the major problems requiring a solution is the icing of turbine structures. Icing reduces the performance of wind turbines, which, over a long cold period, can significantly affect the reliability of power supply. In order to predict and control power performance, the process of ice accretion has to be carefully tracked. There are two ways to detect icing: directly or indirectly. The first uses dedicated ice-detection instruments; the second infers icing from indirect characteristics of turbine performance. One such indirect method for ice detection and power loss estimation is proposed and used in this paper, and its results are compared with those obtained directly from ice sensors. The data used were measured at the Muukko wind farm, southeast Finland, during the project 'Wind power in cold climate and complex terrain'. The project was carried out in 9/2013 - 8/2015 with the partners Lappeenranta University of Technology, Alstom Renovables España S.L., TuuliMuukko, and TuuliSaimaa.
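An indirect icing indicator of the kind described can be sketched as the shortfall of measured power against an ice-free reference power curve; this is an illustrative sketch of the idea, not the estimation method actually used in the paper:

```python
def power_loss_fraction(wind_speed, measured_power, power_curve):
    """Indirect icing indicator: fractional shortfall of measured power
    relative to an ice-free reference power curve. `power_curve` is any
    callable mapping wind speed to expected power; interpolation and
    curtailment handling are omitted in this sketch."""
    expected = power_curve(wind_speed)
    if expected <= 0:
        return 0.0
    return max(0.0, (expected - measured_power) / expected)
```

A sustained positive loss fraction at wind speeds where the turbine should produce nominal power is the kind of signature that would then be cross-checked against the ice sensors.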
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
This paper presents the operational analysis of a single-phase integrated buck-boost inverter. This topology is able to convert the DC input voltage into an AC voltage with high static gain, low harmonic content and acceptable efficiency, all in a single stage. The main functional aspects are explained, and the design procedure, system modeling and control, and component requirements are detailed. The main simulation results are included, and two prototypes were implemented and experimentally tested; their results are compared with those of similar topologies available in the literature. © 2012 IEEE.
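For orientation, the ideal static gain of the classical buck-boost cell is |Vo/Vin| = D / (1 - D); the paper's integrated single-stage topology may have a different gain expression, so this is background, not the paper's model:

```python
def buck_boost_gain(duty):
    """Ideal static gain |Vo/Vin| = D / (1 - D) of the classical
    buck-boost converter cell, losses ignored. The integrated
    single-stage inverter in the paper may differ."""
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return duty / (1 - duty)
```

Gains above unity (the "high static gain" the abstract mentions) require duty cycles above 0.5 in this idealized cell.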
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. It is widely acknowledged as a cumbersome, labor intensive, and error prone process, besides being difficult to keep up with the rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for a long-term strategic or mid-term tactical analysis, and lack the necessary flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine system historical log data generated by computing and BI systems, and automatically extract actionable patterns from this data. This dissertation focuses on the development of different data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring the link structures, and a tensor model for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of our proposed approaches.
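A first step of log categorization and event summarization is typically to reduce free-text log lines to templates and count them; this toy sketch illustrates that step only (real log-mining pipelines use far richer templating than number masking):

```python
import re
from collections import Counter

def summarize_events(log_lines):
    """Toy event summarization: reduce each log line to a template by
    masking numeric fields, then count how often each template occurs."""
    templates = (re.sub(r"\d+", "<NUM>", line) for line in log_lines)
    return Counter(templates)
```

Grouping by template turns a stream of near-duplicate messages into a small set of event types whose frequencies can then feed indicator identification or prioritization.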
Abstract:
The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) is a World Weather Research Programme project. One of its main objectives is to enhance collaboration on the development of ensemble prediction between operational centers and universities by increasing the availability of ensemble prediction system (EPS) data for research. This study analyzes the prediction of Northern Hemisphere extratropical cyclones by nine different EPSs archived as part of the TIGGE project for the 6-month time period of 1 February 2008–31 July 2008, which included a sample of 774 cyclones. An objective feature tracking method has been used to identify and track the cyclones along the forecast trajectories. Forecast verification statistics have then been produced [using the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analysis as the truth] for cyclone position, intensity, and propagation speed, showing large differences between the different EPSs. The results show that the ECMWF ensemble mean and control have the highest level of skill for all cyclone properties. The Japan Meteorological Agency (JMA), the National Centers for Environmental Prediction (NCEP), the Met Office (UKMO), and the Canadian Meteorological Centre (CMC) have 1 day less skill for the position of cyclones throughout the forecast range. The relative performance of the different EPSs remains the same for cyclone intensity except for NCEP, which has larger errors than for position. NCEP, the Centro de Previsão de Tempo e Estudos Climáticos (CPTEC), and the Australian Bureau of Meteorology (BoM) all have faster intensity error growth in the earlier part of the forecast. They are also very underdispersive and significantly underpredict intensities, perhaps due to the comparatively low spatial resolutions of these EPSs not being able to accurately model the tilted structure essential to cyclone growth and decay.
There is very little difference between the levels of skill of the ensemble mean and control for cyclone position, but the ensemble mean provides an advantage over the control for all EPSs except CPTEC in cyclone intensity and there is an advantage for propagation speed for all EPSs. ECMWF and JMA have an excellent spread–skill relationship for cyclone position. The EPSs are all much more underdispersive for cyclone intensity and propagation speed than for position, with ECMWF and CMC performing best for intensity and CMC performing best for propagation speed. ECMWF is the only EPS to consistently overpredict cyclone intensity, although the bias is small. BoM, NCEP, UKMO, and CPTEC significantly underpredict intensity and, interestingly, all the EPSs underpredict the propagation speed, that is, the cyclones move too slowly on average in all EPSs.
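Cyclone position error against the verifying analysis is a great-circle separation between tracked centres; a minimal sketch using the haversine formula (the tracking itself, and the paper's exact verification conventions, are not reproduced):

```python
import math

def position_error_km(lat_f, lon_f, lat_a, lon_a, radius_km=6371.0):
    """Great-circle separation between a forecast cyclone centre and the
    verifying analysis centre, in km, via the haversine formula."""
    p1, p2 = math.radians(lat_f), math.radians(lat_a)
    dphi = p2 - p1
    dlmb = math.radians(lon_a - lon_f)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```

Averaging this separation over matched forecast-analysis track pairs, as a function of lead time, gives position-error curves of the kind compared across the EPSs.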
Abstract:
This paper describes the design and implementation of an agent-based network for the support of collaborative switching tasks within the control room environment of the National Grid Company plc. This work includes aspects from several research disciplines, including operational analysis, human-computer interaction, finite state modelling techniques, intelligent agents and computer-supported cooperative work. Aspects of these procedures have been used in the analysis of collaborative tasks to produce distributed local models for all involved users. These models have been used as the basis for the production of local finite state automata. These automata have then been embedded within an agent network together with behavioural information extracted from the task and user analysis phase. The resulting support system is capable of task and communication management within the transmission despatch environment.
Abstract:
Global horizontal wavenumber kinetic energy spectra and spectral fluxes of rotational kinetic energy and enstrophy are computed for a range of vertical levels using a T799 ECMWF operational analysis. Above 250 hPa, the kinetic energy spectra exhibit a distinct break between steep and shallow spectral ranges, reminiscent of dual power-law spectra seen in aircraft data and high-resolution general circulation models. The break separates a large-scale "balanced" regime in which rotational flow strongly dominates divergent flow and a mesoscale "unbalanced" regime where divergent energy is comparable to or larger than rotational energy. Between 230 and 100 hPa, the spectral break shifts to larger scales (from n = 60 to n = 20, where n is the spherical harmonic index) as the balanced component of the flow preferentially decays. The location of the break remains fairly stable throughout the stratosphere. The spectral break in the analysis occurs at somewhat larger scales than the break seen in aircraft data. Nonlinear spectral fluxes defined for the rotational component of the flow maximize between about 300 and 200 hPa. Large-scale turbulence thus centers on the extratropical tropopause region, within which there are two distinct mechanisms of upscale energy transfer: eddy–eddy interactions sourcing the transient energy peak in synoptic scales, and zonal mean–eddy interactions forcing the zonal flow. A well-defined downscale enstrophy flux is clearly evident at these altitudes. In the stratosphere, the transient energy peak moves to planetary scales and zonal mean–eddy interactions become dominant.
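As a reminder of the standard notation behind these diagnostics (the conventional definition, not reproduced from the paper), the cumulative spectral flux through total wavenumber n is

```latex
\Pi(n) = -\sum_{n'=0}^{n} T(n'),
```

where T(n') is the nonlinear transfer of rotational kinetic energy into spherical harmonic index n'. A range of n over which \Pi(n) < 0 indicates upscale (inverse) energy transfer, while a positive analogous enstrophy flux indicates downscale enstrophy transfer, consistent with the behaviour described above near the tropopause.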
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extratropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to examine small-scale dynamical processes in specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration for obtaining a precipitation field whose distribution and intensity match observations. It uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF operational analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialized at varying times before the peak precipitation is observed to test the importance of the initialization and boundary conditions, and how long the simulation can be run for. The results are verified against rain gauge data and show that the model intensities are most similar to observations when the model is initialized 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations can produce realistic precipitation intensities when driven by the coarser-resolution data.
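Verification against rain gauges of the kind described typically reduces to point-wise error statistics at matched gauge locations; a minimal sketch (the study's actual verification measures are not specified in the abstract):

```python
import math

def rmse(simulated, observed):
    """Point verification against rain gauges: root-mean-square error of
    accumulated precipitation (mm) at matched gauge locations."""
    if len(simulated) != len(observed):
        raise ValueError("series must be the same length")
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(simulated))
```

Because gauges are irregularly spaced, the model field must first be sampled at (or interpolated to) each gauge location, which is one reason non-gridded verification is harder than gridded comparison.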
Abstract:
The removal of aquatic plants has been used as an alternative to chemical and biological control because of environmental restrictions in some Brazilian regions. The objective of this work was to develop a model for the economic and operational analysis of the mechanical removal of aquatic plants, in order to carry out a comparative economic study with chemical control. The operation was studied in the reservoir of a pumping plant in Barra do Piraí, RJ. The system consists of backhoes installed on barges, used to cut the plants and release them into the water flow. Before the water intake there is a floating barrier that intercepts the plants, which are then removed by a crane fixed on the banks. The plants are stored for some time and afterwards discarded. There is also a cleaning system for the water-intake screens. Data on the total volume of discarded plants were collected over 14 months, and the biomass volume produced per area was evaluated for the main infesting species. The contractor managing the service provided cost spreadsheets and other operational parameters. A model was developed to calculate costs per hectare of plants removed. The results showed an average monthly cost of US$ 17,780.28 per hectare. Despite the high cost, the removal system was able to control only 4.1% of the infested area of the reservoir at the time of data collection. Simulating data from a glyphosate application, chemical control would cost only 0.23% of the removal cost. Sensitivity analyses showed that plant compaction for transport, the volume of plants produced per area, and transport cost are the main parameters for optimization.
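The structure of the per-hectare cost model is not given in the abstract; a minimal sketch of the two headline quantities it reports (unit cost and share of the infested area controlled), with illustrative inputs rather than the study's data:

```python
def removal_economics(monthly_cost_usd, area_cleared_ha, infested_area_ha):
    """Headline quantities of a mechanical-removal operation: unit cost
    per hectare cleared and the fraction of the infested area covered.
    Inputs are illustrative, not the study's data."""
    return {
        "cost_per_ha": monthly_cost_usd / area_cleared_ha,
        "coverage": area_cleared_ha / infested_area_ha,
    }
```

Feeding in the contractor's monthly cost and the cleared and infested areas reproduces the kind of unit-cost and coverage figures the study compares against a simulated glyphosate application.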
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)