995 results for NETWORK-ANALYZER CALIBRATION
Abstract:
This paper studies the energy-efficiency and service characteristics of a recently developed energy-efficient MAC protocol for wireless sensor networks in simulation and on a real sensor hardware testbed. This opportunity is seized to illustrate how simulation models can be verified by cross-comparing simulation results with real-world experiment results. The paper demonstrates that by careful calibration of simulation model parameters, the inevitable gap between simulation models and real-world conditions can be reduced. It concludes with guidelines for a methodology for model calibration and validation of sensor network simulation models.
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of a few centimetres only. It is concluded that both the uncertainty with regard to the length of individual fractures and the detailed geometry of the network along the flowpath between injection and extraction boreholes are not critical because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, evidenced by a characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours–days), their volume is very small and, with time progressing, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both porosity (and therefore the effective diffusion coefficient) and sorption Kds are more than one order of magnitude smaller compared to fault gouge, thus indicating that long-term retardation is expected to occur but to be less pronounced.
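As context for the "characteristic shape of the trailing edge" invoked above, a standard dual-porosity result (not quoted in the abstract itself, stated here only as background) is that matrix diffusion imposes a power-law tail on the breakthrough curve:

```latex
% Classical late-time behaviour of a breakthrough curve controlled by matrix
% diffusion (background only; not a result reported in the paper):
\[
  c(t) \;\propto\; t^{-3/2}, \qquad t \gg t_{\mathrm{adv}},
\]
% whereas purely advective-dispersive transport decays much faster, so a
% -3/2 slope of the trailing edge on a log-log plot points to matrix diffusion.
```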
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, an assessment of discrimination, using the area under the receiver operating characteristic (ROC) curve, and of calibration, comparing mortality rates with those originally published, was performed. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low, medium, and high GAP risk categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consistent with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity and specificity, as well as areas under the ROC curve. CONCLUSION We found both scores to be valid triage tools to stratify emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
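A minimal sketch of the validation arithmetic described above (discrimination via the area under the ROC curve, plus sensitivity, specificity and likelihood ratios at a cutoff), written against hypothetical score and outcome arrays rather than the TARN data:

```python
import numpy as np

def auc(score, died):
    """AUC as P(survivor score > non-survivor score), ties counted half;
    assumes higher GAP/MGAP scores indicate lower risk of death."""
    pos, neg = score[died], score[~died]            # non-survivors, survivors
    greater = (neg[:, None] > pos[None, :]).mean()
    ties = (neg[:, None] == pos[None, :]).mean()
    return greater + 0.5 * ties

def cutoff_performance(score, died, cutoff):
    """Sensitivity, specificity, LR+ and LR- when scores <= cutoff are
    triaged as high risk. The cutoff value here is purely illustrative."""
    flagged = score <= cutoff
    sens = (flagged & died).sum() / died.sum()
    spec = (~flagged & ~died).sum() / (~died).sum()
    return sens, spec, sens / (1 - spec), (1 - sens) / spec

# Hypothetical example data, not the 79,807-patient TARN cohort.
rng = np.random.default_rng(1)
died = rng.random(1000) < 0.07
score = np.where(died, rng.integers(3, 20, 1000), rng.integers(15, 25, 1000))
print(auc(score, died), cutoff_performance(score, died, 17))
```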
Abstract:
The important task of observing the global coverage of middle atmospheric trace gases such as water vapor and ozone is usually accomplished by satellites. Climate and atmospheric studies rely upon knowledge of trace gas distributions throughout the stratosphere and mesosphere. Many of these gases are currently measured from satellites, but it is not clear whether this capability will be maintained in the future. This could lead to a significant knowledge gap regarding the state of the atmosphere. We explore the possibilities of mapping middle atmospheric water vapor in the Northern Hemisphere by using Lagrangian trajectory calculations and water vapor profile data from a small network of five ground-based microwave radiometers. Four of them are operated within the frame of NDACC (Network for the Detection of Atmospheric Composition Change). Because the instruments are based on different hardware and calibration setups, a height-dependent bias of the retrieved water vapor profiles has to be expected among the microwave radiometers. In order to correct and harmonize the different data sets, the Microwave Limb Sounder (MLS) on the Aura satellite is used as a kind of traveling standard. A domain-averaging TM (trajectory mapping) method is applied, which simplifies the subsequent validation of the quality of the trajectory-mapped water vapor distribution against direct satellite observations. Trajectories are calculated forwards and backwards in time for up to 10 days using 6-hourly meteorological wind analysis fields. Overall, a total of four case studies of trajectory mapping in different meteorological regimes are discussed. One of the case studies takes place during a major sudden stratospheric warming (SSW) accompanied by the polar vortex breakdown; a second takes place after the re-formation of a stable circulation system. TM cases close to the fall equinox and the June solstice of 2012 complete the study, showing the high potential of a network of ground-based remote sensing instruments to synthesize hemispheric maps of water vapor.
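A minimal sketch of the forward/backward trajectory step that underlies trajectory mapping, assuming constant wind components and a simple Euler update for brevity; an actual TM run would interpolate the 6-hourly analysis winds in space and time along the path:

```python
import numpy as np

def advect(lon, lat, u, v, hours, dt_s=1800.0, backward=False):
    """Advect an air parcel horizontally with wind components u, v (m/s).
    Constant winds and an Euler step are simplifications made here for
    illustration; they are not the scheme used in the paper."""
    R = 6.371e6                              # Earth radius in metres
    sign = -1.0 if backward else 1.0
    for _ in range(int(hours * 3600 / dt_s)):
        lat += sign * v * dt_s / R * 180.0 / np.pi
        lon += sign * u * dt_s / (R * np.cos(np.radians(lat))) * 180.0 / np.pi
    return lon % 360.0, lat

# Ten-day forward trajectory of a parcel starting over Bern in a 10 m/s westerly flow.
print(advect(7.44, 46.95, u=10.0, v=0.0, hours=240))
```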
Abstract:
In the framework of the ACTRIS (Aerosols, Clouds, and Trace Gases Research Infrastructure Network) summer 2012 measurement campaign (8 June–17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the single calculus chain (SCC) – the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products – was used. All stations sent measurements of 1 h duration in real time to the SCC server in a predefined netcdf file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98 % and 79 % of the files sent to the SCC were successfully pre-processed and processed, respectively. Those percentages are quite high taking into account that no cloud screening was performed on the lidar data. The paper draws present and future SCC users' attention to the most critical parameters of the SCC product configuration and their possible optimal values, but also to the limitations inherent in the raw data. The continuous use of SCC direct and derived products in heterogeneous conditions is used to demonstrate two potential applications of the EARLINET infrastructure: the monitoring of a Saharan dust intrusion event and the evaluation of two dust transport models. The efforts made to define the measurement protocol and to configure the SCC properly pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modeling, climate research and calibration/validation activities of spaceborne observations.
Abstract:
Application of the spectrum analyzer for illustrating several concepts associated with mobile communications is discussed. Specifically, two groups of observable features are described. First, time variation and frequency selectivity of multipath propagation can be revealed by carrying out simple measurements on commercial-network GSM and UMTS signals. Second, the main time-domain and frequency-domain features of GSM and UMTS radio signals can be observed. This constitutes a valuable tool for teaching mobile communication courses.
Abstract:
Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits. Such is the case for within-die parameter uncertainty, varying workload conditions, aging, temperature, etc. Monitoring, calibration and dynamic adaptation have emerged as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, the need for lightweight monitoring networks has become essential. In this work we present a lightweight network architecture based on sharing the digitization resources of nodes that require time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform the access multiplexing and transmit the information. This represents a 16% improvement in area and power consumption compared to traditional approaches.
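A minimal sketch of the time-domain quantization idea described above: each node is assigned a slot on the shared wire and signals its time-to-digital measurement by the instant at which it pulses the wire within that slot. Slot width, resolution and the helper names are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative single-wire, slot-based time-domain multiplexing.
SLOT_NS = 100.0          # time slot reserved per node on the shared wire (assumed)
LSB_NS = 0.5             # time-to-digital resolution (assumed)

def encode(node_id, measured_delay_ns):
    """Absolute time at which this node asserts the shared wire."""
    q = min(round(measured_delay_ns / LSB_NS), int(SLOT_NS / LSB_NS) - 1)
    return node_id * SLOT_NS + q * LSB_NS

def decode(event_time_ns):
    """Recover (node_id, quantized delay) from a pulse time seen on the wire."""
    node_id = int(event_time_ns // SLOT_NS)
    return node_id, event_time_ns - node_id * SLOT_NS

print(decode(encode(3, 17.3)))   # -> (3, 17.5)
```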
Abstract:
This article proposes a method for calibrating discontinuity sets in rock masses. We present a novel approach for the calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to the GA operations of selection, crossover and mutation. The back-calculated parameters are employed to assess the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered, and that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases it can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
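A minimal sketch, assuming made-up parameter ranges and a placeholder objective function, of the GA machinery the abstract describes: binary-encoded chromosomes for intensity and mean discontinuity size, evolved by selection, one-point crossover and bit-flip mutation. The paper's actual objective functions compare the stochastic network against field observations; the simple target statistics below only stand in for that comparison.

```python
import random

BITS = 12
RANGES = {"intensity": (0.1, 5.0),    # fractures per m^3 (assumed range)
          "mean_size": (0.5, 10.0)}   # mean discontinuity size in m (assumed range)

def decode(chrom):
    """Map a binary chromosome to real-valued network parameters."""
    out = {}
    for i, (name, (lo, hi)) in enumerate(RANGES.items()):
        bits = chrom[i * BITS:(i + 1) * BITS]
        frac = int("".join(map(str, bits)), 2) / (2 ** BITS - 1)
        out[name] = lo + frac * (hi - lo)
    return out

def fitness(chrom, target):
    """Placeholder objective: squared relative error against target statistics."""
    p = decode(chrom)
    return -sum(((p[k] - target[k]) / target[k]) ** 2 for k in target)

def evolve(target, pop_size=60, gens=100, p_cross=0.8, p_mut=0.01):
    n = BITS * len(RANGES)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda c: fitness(c, target), reverse=True)
        nxt = scored[:2]                                   # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(scored[:pop_size // 2], 2)  # truncation selection
            if random.random() < p_cross:                    # one-point crossover
                cut = random.randrange(1, n)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            # bit-flip mutation
            nxt += [[bit ^ (random.random() < p_mut) for bit in c] for c in (a, b)]
        pop = nxt[:pop_size]
    return decode(max(pop, key=lambda c: fitness(c, target)))

print(evolve({"intensity": 2.3, "mean_size": 4.1}))
```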
Abstract:
The calibration coefficients of several models of cup and propeller anemometers were analysed. The analysis was based on a series of laboratory calibrations between January 2003 and August 2007. Mean and standard deviation values of calibration coefficients from the anemometers studied were included. Two calibration procedures were used and compared. In the first, recommended by the Measuring Network of Wind Energy Institutes (MEASNET), 13 measurement points were taken over a wind speed range of 4 to 16 m s⁻¹. In the second procedure, 9 measurement points were taken over a wider speed range of 4 to 23 m s⁻¹. Results indicated no significant differences between the two calibration procedures applied to the same anemometer in terms of measured wind speed and wind turbines' Annual Energy Production (AEP). The influence of the cup anemometers' design on the calibration coefficients was also analysed. The results revealed that the slope of the calibration curve, if based on the rotation frequency and not the anemometer's output frequency, seemed to depend on the cup center rotation radius.
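A minimal sketch of how the calibration coefficients (slope and offset of the anemometer transfer function) are obtained from a 13-point MEASNET-style run, using synthetic numbers in place of the wind tunnel measurements:

```python
import numpy as np

# Hypothetical calibration run: 13 tunnel speeds between 4 and 16 m/s and the
# anemometer output frequency recorded at each point (synthetic data).
v_tunnel = np.linspace(4.0, 16.0, 13)                                  # m/s
f_output = (v_tunnel - 0.25) / 0.62 + np.random.normal(0.0, 0.05, 13)  # Hz

# Calibration coefficients: v = slope * f + offset, fitted by least squares.
slope, offset = np.polyfit(f_output, v_tunnel, 1)
print(f"slope = {slope:.4f} m, offset = {offset:.4f} m/s")
```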
Abstract:
In developing neural network techniques for real-world applications it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian process and predictive error bars evaluated on real data. The problem domain is concerned with the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and is a good candidate for exploring the use of confidence predictions due to its safety-critical nature.
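A minimal sketch of point-wise error bars from a bootstrap ensemble of small networks, using toy data standing in for the engine-mapping measurements. This is only a simple stand-in for the Bayesian, Gaussian process and predictive-error-bar methods compared in the paper, not a reproduction of any of them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy data (e.g. speed/load inputs -> air-fuel ratio); not the paper's data set.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0.0, 0.05, 200)

# Bootstrap ensemble: the spread of member predictions gives point-wise error bars.
members = []
for seed in range(20):
    idx = rng.integers(0, len(X), len(X))
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    members.append(m.fit(X[idx], y[idx]))

preds = np.stack([m.predict(X) for m in members])
mean, std = preds.mean(axis=0), preds.std(axis=0)
print("95% interval half-width at the first point:", 1.96 * std[0])
```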
Abstract:
Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to be able to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights on the design of AID for arterial street applications.
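A minimal sketch of how the reported performance measures (DR, FAR, MTTD) can be computed from aligned per-cycle incident and alarm series; the variable layout and cycle length are assumptions for illustration, not the dissertation's code:

```python
import numpy as np

def detection_metrics(incident, alarm, cycle_s=10.0):
    """Detection rate, false alarm rate and mean time to detect from two
    aligned boolean series sampled once per evaluation cycle of cycle_s seconds."""
    incident = np.asarray(incident, dtype=bool)
    alarm = np.asarray(alarm, dtype=bool)
    # FAR: share of non-incident intervals that raised an alarm.
    far = (alarm & ~incident).sum() / max((~incident).sum(), 1)
    # Split the incident series into episodes and check each for a detection.
    starts = np.flatnonzero(incident & ~np.concatenate(([False], incident[:-1])))
    detected, delays = 0, []
    for s in starts:
        e = s
        while e < len(incident) and incident[e]:
            e += 1
        hits = np.flatnonzero(alarm[s:e])
        if hits.size:
            detected += 1
            delays.append(hits[0] * cycle_s)      # time from onset to first alarm
    dr = detected / max(len(starts), 1)
    mttd = float(np.mean(delays)) if delays else float("nan")
    return dr, far, mttd

# Toy example: one 6-cycle incident detected 3 cycles (30 s) after onset.
print(detection_metrics([0,0,1,1,1,1,1,1,0,0], [0,0,0,0,0,1,1,1,0,0]))
```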
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, there are many issues that still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, these few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate the thermal sensors over a range of voltage levels.
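A minimal sketch of the per-voltage-level calibration idea: fit a separate linear correction for each supply voltage from raw sensor codes read at known reference temperatures. The numbers, voltage levels and data layout are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Hypothetical calibration table for one on-core thermal sensor: raw codes read
# while the die is held at known reference temperatures, per voltage level.
reference_temp = np.array([40.0, 55.0, 70.0, 85.0])         # degrees C
raw_code = {0.8: np.array([412, 498, 585, 670]),             # codes at 0.8 V (made up)
            1.1: np.array([395, 482, 571, 659])}             # codes at 1.1 V (made up)

# Fit a separate linear correction per voltage level, since the sensor
# response drifts with the core's current supply voltage.
calib = {v: np.polyfit(codes, reference_temp, 1) for v, codes in raw_code.items()}

def temperature(code, vdd):
    """Convert a raw sensor code to degrees C using the fit for this voltage."""
    gain, offset = calib[vdd]
    return gain * code + offset

print(round(temperature(540, 0.8), 1))
```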
Abstract:
This document summarizes a major part of the work performed by the FP7-JERICO consortium, including 27 partner institutions, during 4 years (2011-2015). Its objective is to propose a strategy for European coastal observation and monitoring. To do so, we give an overview of the main achievements of the FP7-JERICO project. From this overview, gaps are analysed to draw recommendations for the future. Overview, gaps and recommendations are addressed at both the hardware and software levels of the JERICO research infrastructure. The main part of the document builds upon this analysis, as well as upon discussions held in dedicated JERICO strategy workshops, to derive a general strategy for the future, setting out priorities to be targeted and possible funding mechanisms. This document was initiated in 2014 by the coordination team, but since an overview of the entire project and its achievements was needed to feed this strategy deliverable, it could not be completed before the end of FP7-JERICO in April 2015. The preparation of the JERICO-NEXT proposal in summer 2014, in answer to an H2020 call for proposals, pushed the consortium ahead and fed much of the thinking behind this strategy; the intention, however, was not to propose a strategy bounded only by the JERICO-NEXT answer. The authors are aware that writing JERICO-NEXT may have introduced a bias in this thinking, and they have tried to remain open. Nevertheless, comments are always welcome to take the strategy further. Structure of the document: Chapter 3 introduces the need for sustained coastal observatories from different points of view, including a short description of the FP7-JERICO project. In Chapter 4, an analysis of the JERICO coastal observatory hardware (platforms and sensors), in terms of status at the end of JERICO, identified gaps and recommendations for further development, is provided region by region. The main challenges that remain to be overcome are also summarized. Chapter 5 is dedicated to the JERICO infrastructure software (calibration, operation, quality assessment, data management) and the progress made through JERICO on harmonization of procedures and definition of best practices. Chapter 6 provides elements of a strategy towards sustainable and integrated coastal observations for Europe, drawing a roadmap for cost-effective, science-based consolidation of the present infrastructure while maximizing the potential arising from JERICO in terms of innovation, wealth creation and business development. For readers unfamiliar with JERICO, any chapter can be read independently after Chapter 3. More details are available in the JERICO final report and its intermediate reports; all are available on the JERICO web site (www.jerico-FP7.eu), as is every deliverable. Each chapter lists the relevant JERICO documents. A short bibliography is available at the end of this deliverable.