981 results for NETWORK-ANALYZER CALIBRATION


Relevance:

30.00%

Publisher:

Abstract:

The calibration coefficients of several models of cup and propeller anemometers were analysed. The analysis was based on a series of laboratory calibrations between January 2003 and August 2007. Mean and standard deviation values of the calibration coefficients from the anemometers studied were included. Two calibration procedures were used and compared. In the first, recommended by the Measuring Network of Wind Energy Institutes (MEASNET), 13 measurement points were taken over a wind speed range of 4 to 16 m s⁻¹. In the second procedure, 9 measurement points were taken over a wider speed range of 4 to 23 m s⁻¹. Results indicated no significant differences between the two calibration procedures applied to the same anemometer in terms of measured wind speed and wind turbines' Annual Energy Production (AEP). The influence of the cup anemometers' design on the calibration coefficients was also analysed. The results revealed that the slope of the calibration curve, if based on the rotation frequency rather than the anemometer's output frequency, seemed to depend on the cup center rotation radius.
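A calibration run of this kind produces a set of (frequency, wind speed) pairs to which a straight line v = A·f + B is fit; A and B are the calibration coefficients the abstract refers to. The following is a minimal sketch of that least-squares fit, with entirely synthetic data points (the exactly linear transfer function and the values of A and B are illustrative assumptions, not MEASNET measurements).

```python
# Least-squares fit of an anemometer transfer function v = A*f + B,
# where f is the output frequency and v the tunnel wind speed.

def fit_calibration(freqs, speeds):
    """Return (slope, offset) of the least-squares line v = slope*f + offset."""
    n = len(freqs)
    mean_f = sum(freqs) / n
    mean_v = sum(speeds) / n
    sxx = sum((f - mean_f) ** 2 for f in freqs)
    sxy = sum((f - mean_f) * (v - mean_v) for f, v in zip(freqs, speeds))
    slope = sxy / sxx
    offset = mean_v - slope * mean_f
    return slope, offset

# 13 synthetic points spanning roughly 4-16 m/s, mimicking the MEASNET-style
# procedure (hypothetical coefficients A = 0.05 m, B = 0.2 m/s)
freqs = [80 + 20 * i for i in range(13)]      # 80 .. 320 Hz
speeds = [0.05 * f + 0.2 for f in freqs]      # 4.2 .. 16.2 m/s
slope, offset = fit_calibration(freqs, speeds)
```

Comparing the two procedures then amounts to running the same fit over 13 points on 4-16 m s⁻¹ and 9 points on 4-23 m s⁻¹ and comparing the resulting (A, B) pairs.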

Relevance:

30.00%

Publisher:

Abstract:

In developing neural network techniques for real-world applications it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian process, and predictive error bar approaches, evaluated on real data. The problem domain is the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and, given its safety-critical nature, is a good candidate for exploring the use of confidence predictions.
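One simple way to attach point-wise error bars to a regression model, in the spirit of the confidence-interval methods compared above, is to fit an ensemble of models on bootstrap resamples and use the spread of their predictions at each query point. The sketch below uses a plain linear fit as a stand-in model; the data and all names are illustrative assumptions, not the engine-calibration networks from the paper.

```python
# Bootstrap ensemble error bars: refit on resampled data, then report the
# mean and standard deviation of the ensemble's predictions at x_query.
import random
import statistics

def fit_line(xs, ys):
    """Least-squares line y = a*x + b (stand-in for a trained model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def bootstrap_error_bars(xs, ys, x_query, n_models=50, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in xs]   # resample with replacement
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_query + b)
    return statistics.mean(preds), statistics.stdev(preds)

# noisy synthetic data around y = 2x + 1
xs = list(range(20))
ys = [2 * x + 1 + ((x % 5) - 2) * 0.1 for x in xs]
mean_pred, err = bootstrap_error_bars(xs, ys, x_query=10.0)
```

The Bayesian and Gaussian process approaches in the paper derive the interval from a posterior instead of a resampled ensemble, but the output has the same shape: a prediction plus a point-wise uncertainty.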

Relevance:

30.00%

Publisher:

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. 
In addition, DWT was found to be able to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights on the design of AID for arterial street applications.
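The two evaluation metrics used above to compare design alternatives can be sketched as follows. The definitions here are the common ones (DR over actual incident intervals, FAR over all algorithm decisions); the dissertation's exact denominators may differ, and the flag sequences are invented for illustration.

```python
# Detection rate (DR) and false alarm rate (FAR) over per-interval
# incident flags (1 = incident present / alarm raised, 0 = clear).

def detection_metrics(actual, predicted):
    detected = sum(1 for a, p in zip(actual, predicted) if a and p)
    incidents = sum(1 for a in actual if a)
    false_alarms = sum(1 for a, p in zip(actual, predicted) if not a and p)
    dr = detected / incidents          # fraction of real incidents flagged
    far = false_alarms / len(predicted)  # fraction of decisions that are false alarms
    return dr, far

actual    = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]   # hypothetical ground truth
predicted = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0]   # hypothetical ANN output
dr, far = detection_metrics(actual, predicted)
```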


Relevance:

30.00%

Publisher:

Abstract:

Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors, the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. However, if this power is drawn by just a few of the many cores, those cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose, software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltage levels.
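A minimal sketch of per-voltage-level sensor calibration in the spirit of the software-based auto-calibration described above: for each voltage level, fit a linear correction T = gain · raw + bias against reference temperatures, then apply the level-specific correction at read time. All readings and voltage levels below are invented for illustration; the thesis's actual calibration model may differ.

```python
# Per-voltage-level linear correction for an on-chip thermal sensor.

def fit_correction(raw, ref):
    """Least-squares (gain, bias) mapping raw readings to reference temps."""
    n = len(raw)
    mr, mt = sum(raw) / n, sum(ref) / n
    sxx = sum((r - mr) ** 2 for r in raw)
    sxy = sum((r - mr) * (t - mt) for r, t in zip(raw, ref))
    gain = sxy / sxx
    return gain, mt - gain * mr

class CalibratedSensor:
    def __init__(self):
        self.coeffs = {}  # voltage level -> (gain, bias)

    def calibrate(self, voltage, raw, ref):
        self.coeffs[voltage] = fit_correction(raw, ref)

    def read(self, voltage, raw_value):
        gain, bias = self.coeffs[voltage]
        return gain * raw_value + bias

sensor = CalibratedSensor()
# hypothetical calibration runs at two voltage levels
sensor.calibrate(0.8, raw=[40, 50, 60], ref=[42.0, 53.0, 64.0])
sensor.calibrate(1.1, raw=[40, 50, 60], ref=[45.0, 57.5, 70.0])
temp = sensor.read(0.8, 55)
```

Keeping a separate (gain, bias) pair per voltage level is the point of the per-level calibration: the same raw reading maps to different temperatures depending on the core's current DVFS state.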

Relevance:

30.00%

Publisher:

Abstract:

This document summarizes a major part of the work performed by the FP7-JERICO consortium, comprising 27 partner institutions, over four years (2011-2015). Its objective is to propose a strategy for European coastal observation and monitoring. To do so, we give an overview of the main achievements of the FP7-JERICO project. From this overview, gaps are analysed to draw recommendations for the future. Overview, gaps and recommendations are addressed at both the Hardware and Software levels of the JERICO Research Infrastructure. The main part of the document builds upon this analysis, as well as upon discussions held in dedicated JERICO strategy workshops, to set out a general strategy for the future, identifying priorities to be targeted and possible funding mechanisms. This document was initiated in 2014 by the coordination team; however, since an overview of the entire project and its achievements was needed to feed this strategy deliverable, it could not be completed before the end of FP7-JERICO in April 2015. The preparation of the JERICO-NEXT proposal in summer 2014, in answer to an H2020 call, pushed the consortium forward and fed deeper reflection on this strategy, but the intention was not to propose a strategy bounded only by the JERICO-NEXT response. The authors are aware that writing JERICO-NEXT may itself have biased their thinking, and they have tried to remain open. Comments are always welcome.
Structure of the document: Chapter 3 introduces the need for sustained coastal observatories from different points of view, including a short description of the FP7-JERICO project. Chapter 4 provides, region by region, an analysis of the JERICO coastal observatory Hardware (platforms and sensors): its status at the end of JERICO, identified gaps, and recommendations for further development. The main challenges that remain to be overcome are also summarized. Chapter 5 is dedicated to the JERICO infrastructure Software (calibration, operation, quality assessment, data management) and the progress made through JERICO on the harmonization of procedures and the definition of best practices. Chapter 6 provides elements of a strategy towards sustainable and integrated coastal observations for Europe, drawing a roadmap for cost-effective, science-based consolidation of the present infrastructure while maximizing the potential arising from JERICO in terms of innovation, wealth creation, and business development. For readers unfamiliar with JERICO, after reading Chapter 3 any chapter can be read independently. More details are available in the JERICO final and intermediate reports and deliverables, all available on the JERICO website (www.jerico-FP7.eu). Each chapter lists the relevant JERICO documents. A small bibliographic list is available at the end of this deliverable.

Relevance:

30.00%

Publisher:

Abstract:

Light absorption by aerosols has a great impact on climate change. A photoacoustic spectrometer (PA) coupled with aerosol-based classification techniques represents an in situ method that can quantify the light absorption by aerosols in real time, yet significant differences have been reported using this method versus filter-based methods or the so-called difference method based upon light extinction and light scattering measurements. This dissertation focuses on developing calibration techniques for instruments used in measuring the light absorption cross section, including both particle diameter measurements by the differential mobility analyzer (DMA) and light absorption measurements by the PA. Appropriate reference materials were explored for the calibration/validation of both measurements. The light absorption of carbonaceous aerosols was also investigated to provide a fundamental understanding of the absorption mechanism. The first topic of interest in this dissertation is the development of calibration nanoparticles. In this study, bionanoparticles were confirmed to be a promising reference material for particle diameter as well as ion mobility. Experimentally, bionanoparticles demonstrated outstanding homogeneity in mobility compared to currently used calibration particles. A numerical method was developed to calculate the true distribution and to explain the broadening of the measured distribution. The high stability of bionanoparticles was also confirmed. For the PA measurement, three aerosols with spherical or near-spherical shapes were investigated as possible candidates for a reference standard: C60, copper and silver. Comparisons were made between experimental photoacoustic absorption data and Mie theory calculations. This resulted in the identification of C60 particles with a mobility diameter of 150 nm to 400 nm as an absorbing standard at wavelengths of 405 nm and 660 nm. Copper particles with a mobility diameter of 80 nm to 300 nm are also shown to be a promising reference candidate at a wavelength of 405 nm. The second topic of this dissertation focuses on the investigation of light absorption by carbonaceous particles using the PA. Optical absorption spectra of size- and mass-selected laboratory-generated aerosols consisting of black carbon (BC), BC with a non-absorbing coating (ammonium sulfate and sodium chloride) and BC with a weakly absorbing coating (brown carbon derived from humic acid) were measured across the visible to near-IR (500 nm to 840 nm). The manner in which BC mixed with each coating material was investigated. The absorption enhancement of BC was determined to be wavelength dependent. Optical absorption spectra were also taken for size- and mass-selected smoldering smoke produced from six types of commonly seen wood in a laboratory-scale apparatus.
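The wavelength-dependent absorption enhancement mentioned above is, at each wavelength, the ratio of the absorption of coated BC to that of bare BC. A small sketch under that common definition is shown below; the cross-section values (in arbitrary units) are invented for illustration and are not from the dissertation's measurements.

```python
# Absorption enhancement per wavelength: coated / bare cross-section ratio.

def absorption_enhancement(bare, coated):
    """bare/coated: {wavelength_nm: absorption cross section (arb. units)}."""
    return {wl: coated[wl] / bare[wl] for wl in bare}

bare_bc   = {500: 2.0, 660: 1.6, 840: 1.0}   # hypothetical bare BC values
coated_bc = {500: 3.0, 660: 2.2, 840: 1.2}   # hypothetical coated BC values
enh = absorption_enhancement(bare_bc, coated_bc)
```

A wavelength-dependent result, as reported above, would show up here as unequal ratios across the three wavelengths.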