22 results for Sampling Time Deviation
Abstract:
This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). The vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS, and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen to minimise the presence of non-terrain features. The vertical accuracy of the DTMs was addressed by subtracting each DTM from TPS point elevations. The error was assessed using exploratory measures including summary statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling the terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the fidelity of its terrain representation suffered under dense vegetation cover. Consequently, its DTM accuracy was the lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat, unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS. However, owing to a systematic bias identified on the flat terrain, the ALS DTM accuracy there was the lowest (RMSE 0.29 m), exceeding the error level stated by the data provider. The error distributions were more closely approximated by a normal distribution defined using the median and normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and error propagation. © 2012 Elsevier Ltd.
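The robust measures favoured in that conclusion are straightforward to compute. Below is a minimal sketch, assuming two matched NumPy arrays of check-point elevations (names illustrative), of the classical RMSE alongside the median and normalized median absolute deviation (NMAD):

```python
import numpy as np

def vertical_error_stats(dtm_elev, tps_elev):
    """Summarise DTM vertical error against TPS check points.

    dtm_elev, tps_elev: 1-D arrays of elevations (m) at the same points.
    Returns classical (mean, RMSE) and robust (median, NMAD) measures.
    """
    err = dtm_elev - tps_elev
    rmse = np.sqrt(np.mean(err ** 2))
    median = np.median(err)
    # The 1.4826 factor makes the NMAD comparable to the standard
    # deviation when the errors are normally distributed, while staying
    # robust to the outliers common in DEM error surfaces.
    nmad = 1.4826 * np.median(np.abs(err - median))
    return {"mean": err.mean(), "rmse": rmse, "median": median, "nmad": nmad}
```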
Abstract:
Manipulator motion planning is a task that relies heavily on the construction of a configuration space prior to path planning. However, when fast real-time motion is needed, the full construction of the manipulator's high-dimensional configuration space can be too slow and expensive. Alternative planning methods that avoid this full construction are needed. Here, an existing local planning method for manipulators based on configuration sampling and subgoal selection is extended. Using a modified Artificial Potential Fields (APF) function, goal-configuration sampling, and a novel subgoal selection method, it produces faster, more optimal paths than the previously proposed work. Simulation results show a decrease in both runtime and path length, along with fewer unexpected local-minimum and collision failures.
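For intuition, the sketch below shows the generic attractive/repulsive potential-field step that APF-style local planners descend. It is a textbook formulation, not the paper's modified APF function, and the configuration-space obstacle list and gains are illustrative assumptions:

```python
import numpy as np

def apf_step(q, q_goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.4, lr=0.05):
    """One gradient-descent step on a classic APF potential.

    q, q_goal: joint-space configurations (1-D arrays).
    obstacles: configurations to repel from (a simplification; real
    planners evaluate obstacle distances in the workspace).
    """
    grad = k_att * (q - q_goal)  # attractive term pulls towards the goal
    for q_obs in obstacles:
        d = np.linalg.norm(q - q_obs)
        if 0.0 < d < rho0:  # repulsion acts only within range rho0
            grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**3 * (q - q_obs)
    return q - lr * grad  # descend the combined potential
```

Pure gradient descent on such a potential is prone to exactly the local minima the abstract mentions, which is what the subgoal-selection extension is meant to mitigate.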
Abstract:
BACKGROUND: PET/CT scanning can determine suitability for curative therapy and inform decision making when considering radical therapy in patients with non-small cell lung cancer (NSCLC). Metastases to central mediastinal lymph nodes (N2) may alter such management decisions. We report a 2-year retrospective series assessing the accuracy of N2 lymph node staging with PET/CT compared to pathological analysis at surgery.
METHODS: Patients with NSCLC attending our centre (excluding those who had induction chemotherapy) who had staging PET/CT scans and pathological nodal sampling between June 2006 and June 2008 were included. For each lymph node assessed pathologically, the corresponding PET/CT status was determined. In total, 64 patients with 200 N2 lymph nodes were analysed.
RESULTS: Sensitivity of PET/CT scans for identifying involved N2 lymph nodes was 39%, specificity 96%, and overall accuracy 90%. For individual lymph node analysis, logistic regression demonstrated a significant linear association between PET/CT sensitivity and time from scanning to surgery (p=0.031), but not for specificity or accuracy. Scans obtained <9 weeks before pathological sampling were significantly more sensitive (64% <9 weeks vs 0% ≥9 weeks, p=0.013) and more accurate (94% <9 weeks vs 81% ≥9 weeks, p=0.007). Differences in specificity were not seen (97% <9 weeks vs 91% ≥9 weeks, p=0.228); no significant difference in specificity was found at any time point.
CONCLUSIONS: We recommend that if a PET/CT scan is older than 9 weeks, and management would be altered by the presence of N2 nodes, re-staging of the mediastinum should be undertaken.
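For reference, the per-node figures above follow the standard confusion-matrix definitions; the sketch below uses hypothetical counts, not data from the study:

```python
def node_staging_metrics(tp, fp, tn, fn):
    """Per-node diagnostic metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)  # involved nodes correctly called positive
    specificity = tn / (tn + fp)  # uninvolved nodes correctly called negative
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```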
Abstract:
The finite difference time domain (FDTD) method has direct applications in musical instrument modeling, simulation of environmental acoustics, room acoustics, and sound reproduction paradigms, all of which benefit from auralization. However, rendering binaural impulse responses from simulated data is not straightforward, as the pressure calculated at FDTD grid nodes does not contain any directional information. This paper addresses this issue by introducing a spherical array to capture sound pressure on a finite difference grid and decomposing it into a plane-wave density function. Binaural impulse responses are then constructed in the spherical harmonics domain by combining the decomposed grid data with free-field head-related transfer functions. The effects of designing a spherical array in a Cartesian grid are studied, with emphasis on the relationships between array sampling and the spatial and spectral design parameters of several finite-difference schemes.
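The decomposition step can be illustrated by a least-squares fit of spherical harmonic coefficients to the pressures captured at the array points. This sketch covers only the harmonic fit; a full plane-wave decomposition would additionally divide by the array's radial (modal) coefficients, and all names are illustrative:

```python
import numpy as np
from scipy.special import sph_harm

def sh_decompose(pressure, theta, phi, order):
    """Least-squares spherical harmonic coefficients of pressures
    sampled at directions (theta: colatitude, phi: azimuth)."""
    # Design matrix of harmonics up to the given order; note SciPy's
    # sph_harm takes (m, n, azimuth, colatitude).
    Y = np.column_stack([
        sph_harm(m, n, phi, theta)
        for n in range(order + 1)
        for m in range(-n, n + 1)
    ])
    coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
    return coeffs
```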
Abstract:
Plasma etch is a key process in modern semiconductor manufacturing facilities, as it simplifies the process flow while achieving tighter dimensional tolerances than wet chemical etch technology. The main challenge of operating plasma etchers is to maintain a consistent etch rate, spatially and temporally, for a given wafer and for successive wafers processed in the same etch tool. Etch rate measurements require expensive metrology steps, so in general only limited sampling is performed. Furthermore, the measurement results are not available in real time, limiting the options for run-to-run control. This paper investigates a Virtual Metrology (VM) enabled Dynamic Sampling (DS) methodology as an alternative paradigm for balancing the need to reduce costly metrology with the need to measure more frequently, and in a timely fashion, to enable wafer-to-wafer control. Using a Gaussian Process Regression (GPR) VM model for etch rate estimation of a plasma etch process, the proposed dynamic sampling methodology is demonstrated and evaluated for a number of predictive dynamic sampling rules. © 2013 IEEE.
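One such predictive rule can be sketched with scikit-learn: fit the GPR virtual metrology model on wafers that have real measurements, then request metrology only for wafers whose predictive variance is high. The names and threshold are illustrative, and the paper evaluates several rules beyond this generic variance-triggered one:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def dynamic_sampling(process_data, etch_rates, candidates, var_threshold=0.05):
    """Flag candidate wafers for real metrology when the VM is uncertain.

    process_data, etch_rates: sensor features and measured etch rates
    for previously measured wafers; candidates: features of new wafers.
    """
    vm = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    vm.fit(process_data, etch_rates)
    mean, std = vm.predict(candidates, return_std=True)
    to_measure = np.where(std**2 > var_threshold)[0]  # uncertain wafers
    return mean, to_measure  # VM estimates plus metrology requests
```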
Abstract:
Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low-overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, in which we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
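The idea behind sampling-based attribution can be sketched as follows. This is a simplified illustration, not ALEA's actual estimator, and the names are assumptions: each periodic sample pairs the executing program counter with a concurrent power reading and charges that sample's energy to the enclosing basic block.

```python
def attribute_energy(samples, block_of, period_s):
    """Charge sampled power to basic blocks, sampling-profiler style.

    samples:  (pc, power_watts) pairs taken every period_s seconds.
    block_of: maps a program counter to its basic-block label.
    A block's total converges to its true energy share as the number
    of samples grows, which is the probabilistic argument behind
    this style of profiling.
    """
    energy = {}
    for pc, power in samples:
        blk = block_of(pc)
        energy[blk] = energy.get(blk, 0.0) + power * period_s
    return energy
```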
Abstract:
A new heuristic based on the Nawaz–Enscore–Ham (NEH) algorithm is proposed for solving the permutation flowshop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness, and kurtosis of processing times, in order to fully describe the shape of their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests show that the proposed algorithm delivers better solution quality than existing benchmark heuristics.
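A sketch of how such a distribution-shape priority rule could slot into NEH's initial job ordering follows; the equal weighting of the four statistics is an assumption for illustration, not the paper's actual rule:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def neh_priority(ptimes):
    """Order jobs for NEH insertion by distributional statistics.

    ptimes: (n_jobs, n_machines) array of processing times.
    """
    avg = ptimes.mean(axis=1)
    mad = np.abs(ptimes - avg[:, None]).mean(axis=1)  # mean absolute deviation
    sk = skew(ptimes, axis=1)
    ku = kurtosis(ptimes, axis=1)
    score = avg + mad + sk + ku  # illustrative equal-weight combination
    return np.argsort(-score)  # jobs in descending priority for insertion
```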