950 results for Model accuracy
Abstract:
Hydrographers have traditionally referred to the nearshore area as the "white ribbon" area due to the challenges associated with the collection of elevation data in this highly dynamic transitional zone between terrestrial and marine environments. Accordingly, available information in this zone is typically characterised by a range of datasets from disparate sources. In this paper we propose a framework to 'fill' the white ribbon area of a coral reef system by integrating multiple elevation and bathymetric datasets acquired by a suite of remote-sensing technologies into a seamless digital elevation model (DEM). A range of datasets are integrated, including field-collected GPS elevation points, terrestrial and bathymetric LiDAR, single and multibeam bathymetry, nautical chart depths and empirically derived bathymetry estimations from optical remote sensing imagery. The proposed framework ranks data reliability internally, thereby avoiding the requirement to quantify absolute error, and results in a high-resolution, seamless product. Nested within this approach is an effective spatially explicit technique for improving the accuracy of bathymetry estimates derived empirically from optical satellite imagery through modelling the spatial structure of residuals. The approach was applied to data collected on and around Lizard Island in northern Australia. Collectively, the framework holds promise for filling the white ribbon zone in coastal areas characterised by similar data availability scenarios. The seamless DEM is referenced to the horizontal coordinate system MGA Zone 55 - GDA 1994, mean sea level (MSL) vertical datum and has a spatial resolution of 20 m.
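The abstract does not specify how the spatial structure of the residuals is modelled, but the general idea — estimate depth empirically, compute residuals where surveyed depths exist, and interpolate those residuals back across the grid as a correction surface — can be sketched with simple inverse-distance weighting. The function and point values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def idw(ctrl_xy, ctrl_val, query_xy, power=2.0, eps=1e-12):
    # inverse-distance-weighted interpolation of control values at query points
    d = np.linalg.norm(query_xy[:, None, :] - ctrl_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * ctrl_val).sum(axis=1) / w.sum(axis=1)

# residuals at control sites: surveyed depth minus empirical (imagery-derived) estimate
ctrl_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
residual = np.array([0.5, -0.3, 0.2])

# interpolated correction to add to the empirical estimate at a new grid cell
query_xy = np.array([[5.0, 5.0]])
correction = idw(ctrl_xy, residual, query_xy)
```

In practice a geostatistical method such as kriging would likely be preferred over IDW, since it models the residuals' spatial covariance explicitly.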
Abstract:
Orbital tuning of benthic δ18O is a common approach for assigning ages to ocean sediment records. Similar environmental forcing of the northern South China Sea and the southeast Asian cave regions allows for transfer of the speleothem δ18O radiometric chronology to the planktonic and benthic δ18O records from Ocean Drilling Program Site 1146, yielding a new chronology with 41 radiometrically calibrated datums, spanning the past 350 kyr. This approach also provides for an independent assessment of the accuracy of the orbitally tuned benthic δ18O chronology for the last 350 kyr. The largest differences relative to the latest chronology occur in marine isotope stages (MIS) 5.4, 5.5, 6, 7, and 9.3. Prominent suborbital-scale structure believed to be global in nature is identified within MIS 5.4 and MIS 7.2. On the basis of the radiometrically calibrated chronology, the time constant of the ice sheet is found to be 5.4 kyr at the precession band (light δ18O lags precession minima by ~55.4°) and 10.4 kyr at the obliquity band (light δ18O lags obliquity maxima by 57.4°). These values are significantly shorter than the single 17 kyr time constant originally estimated by Imbrie et al. (1984), based primarily on the timing of terminations I and II and the 15 kyr time constant used by Lisiecki and Raymo (2005, doi:10.1029/2004PA001071).
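The quoted time constants follow from the phase lags under a single-exponential ice-sheet response model, in which the lag φ at forcing period P satisfies tan φ = 2πT/P. Assuming nominal forcing periods of 23 kyr (precession) and 41 kyr (obliquity) — values not stated in the abstract — the arithmetic can be checked:

```python
import math

def time_constant(period_kyr, phase_deg):
    # single-exponential response: tan(phase) = 2*pi*T/P  =>  T = P*tan(phase)/(2*pi)
    return period_kyr * math.tan(math.radians(phase_deg)) / (2 * math.pi)

tc_precession = time_constant(23.0, 55.4)  # ~5.3 kyr, close to the quoted 5.4 kyr
tc_obliquity = time_constant(41.0, 57.4)   # ~10.2 kyr, close to the quoted 10.4 kyr
```

The small discrepancies suggest the authors used slightly different nominal periods; the functional form, however, reproduces both values.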
Abstract:
The world's largest fossil oyster reef, formed by the giant oyster Crassostrea gryphoides and located in Stetten (north of Vienna, Austria), has been studied by Harzhauser et al. (2015, 2016) and Djuricic et al. (2016). Digital documentation of the unique geological site is provided by terrestrial laser scanning (TLS) at the millimeter scale. Obtaining meaningful results is not merely a matter of data acquisition with a suitable device; it requires proper planning, data management, and postprocessing. Terrestrial laser scanning technology has a high potential for providing precise 3D mapping that serves as the basis for automatic object detection in different scenarios; however, it faces challenges in the presence of large amounts of data and the irregular geometry of an oyster reef. We provide a detailed description of the techniques and strategy used for data collection and processing in Djuricic et al. (2016). Laser scanning made it possible to measure surface points on an estimated 46,840 shells. The oyster specimens are up to 60 cm long, and their surfaces are modeled with a high accuracy of 1 mm. In addition to laser scanning measurements, more than 300 photographs were captured, and an orthophoto mosaic was generated with a ground sampling distance (GSD) of 0.5 mm. This high-resolution 3D information and the photographic texture serve as the basis for ongoing and future geological and paleontological analyses. Moreover, they provide unprecedented documentation for conservation issues at a unique natural heritage site.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Purpose: Although manufacturers of bicycle power monitoring devices SRM and Power Tap (PT) claim accuracy to within 2.5%, there are limited scientific data available in support. The purpose of this investigation was to assess the accuracy of SRM and PT under different conditions. Methods: First, 19 SRM were calibrated, raced for 11 months, and retested using a dynamic CALRIG (50–1000 W at 100 rpm). Second, using the same procedure, five PT were repeat tested on alternate days. Third, the most accurate SRM and PT were tested for the influence of cadence (60, 80, 100, 120 rpm), temperature (8 and 21°C) and time (1 h at ~300 W) on accuracy. Finally, the same SRM and PT were downloaded and compared after random cadence and gear surges using the CALRIG and on a training ride. Results: The mean error scores for SRM and PT factory calibration over a range of 50–1000 W were 2.3 ± 4.9% and -2.5 ± 0.5%, respectively. A second set of trials provided stable results for 15 calibrated SRM after 11 months (-0.8 ± 1.7%), and follow-up testing of all PT units confirmed these findings (-2.7 ± 0.1%). Accuracy for SRM and PT was not largely influenced by time and cadence; however, power output readings were noticeably influenced by temperature (5.2% for SRM and 8.4% for PT). During field trials, SRM average and max power were 4.8% and 7.3% lower, respectively, compared with PT. Conclusions: When operated according to manufacturers' instructions, both SRM and PT offer the coach, athlete, and sport scientist the ability to accurately monitor power output in the lab and the field. Calibration procedures matching performance tests (duration, power, cadence, and temperature) are, however, advised as the error associated with each unit may vary.
Abstract:
In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41,435 firm-quarter Value Line observations over an 11-year (1990–2000) period. Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of these two methods leads to more accurate intrinsic value estimates, compared to either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other.
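In its simplest form, a hybrid of this kind is just a weighted combination of the two intrinsic-value estimates, evaluated by signed percentage pricing error against the observed price. The weights and numbers below are illustrative placeholders, not figures from the study:

```python
def pct_pricing_error(estimate, price):
    # signed percentage pricing error relative to the observed market price
    return (estimate - price) / price

def hybrid_value(direct_v, multiplier_v, w=0.5):
    # simple hybrid: weighted combination of direct-valuation and multiplier estimates
    return w * direct_v + (1.0 - w) * multiplier_v

price = 100.0
direct_v, multiplier_v = 90.0, 106.0
hybrid = hybrid_value(direct_v, multiplier_v)  # 98.0
err = pct_pricing_error(hybrid, price)         # -0.02, i.e. a 2% underestimate
```

Averaging tends to help when the two methods' errors are imperfectly correlated, which is consistent with the paper's "check on the other" conclusion.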
Abstract:
The aim of the study presented was to implement a process model to simulate the dynamic behaviour of a pilot-scale process for anaerobic two-stage digestion of sewage sludge. The model implemented was initiated to support experimental investigations of the anaerobic two-stage digestion process. The model concept implemented in the simulation software package MATLAB(TM)/Simulink(R) is a derivative of the IWA Anaerobic Digestion Model No.1 (ADM1) that has been developed by the IWA task group for mathematical modelling of anaerobic processes. In the present study the original model concept has been adapted and applied to replicate a two-stage digestion process. Testing procedures, including balance checks and 'benchmarking' tests, were carried out to verify the accuracy of the implementation. These combined measures ensured a faultless model implementation without numerical inconsistencies. Parameters for both the thermophilic and the mesophilic process stages have been estimated successfully using data from lab-scale experiments described in literature. Due to the high number of parameters in the structured model, it was necessary to develop a customised procedure that limited the range of parameters to be estimated. The accuracy of the optimised parameter sets has been assessed against experimental data from pilot-scale experiments. Under these conditions, the model predicted reasonably well the dynamic behaviour of a two-stage digestion process in pilot scale.
Abstract:
This paper describes a biventricular model, which couples the electrical and mechanical properties of the heart, and computer simulations of ventricular wall motion and deformation by means of a biventricular model. In the constructed electromechanical model, the mechanical analysis was based on composite material theory and the finite-element method; the propagation of electrical excitation was simulated using an electrical heart model, and the resulting active forces were used to calculate ventricular wall motion. Regional deformation and Lagrangian strain tensors were calculated during the systole phase. Displacements, minimum principal strains and torsion angle were used to describe the motion of the two ventricles. The simulations showed that during the period of systole, (1) the right ventricular free wall moves towards the septum, and at the same time, the base and middle of the free wall move towards the apex, which reduces the volume of the right ventricle; the minimum principal strain (E3) is largest at the apex, then at the middle of the free wall and its direction is in the approximate direction of the epicardial muscle fibres; (2) the base and middle of the left ventricular free wall move towards the apex and the apex remains almost static; the torsion angle is largest at the apex; the minimum principal strain E3 is largest at the apex and its direction on the surface of the middle wall of the left ventricle is roughly in the fibre orientation. These results are in good accordance with results obtained from MR tagging images reported in the literature. This study suggests that such an electromechanical biventricular model has the potential to be used to assess the mechanical function of the two ventricles, and could also improve the accuracy of ECG simulation when it is used in heart torso model-based body surface potential simulation studies.
Abstract:
Hysteresis models that eliminate the artificial pumping errors associated with the Kool-Parker (KP) soil moisture hysteresis model, such as the Parker-Lenhard (PL) method, can be computationally demanding in unsaturated transport models since they need to retain the wetting-drying history of the system. The pumping errors in these models need to be eliminated for correct simulation of cyclical systems (e.g. transport above a tidally forced watertable, infiltration and redistribution under periodic irrigation) if the soils exhibit significant hysteresis. A modification is made here to the PL method that allows it to be more readily applied to numerical models by eliminating the need to store a large number of soil moisture reversal points. The modified-PL method largely eliminates any artificial pumping error and so essentially retains the accuracy of the original PL approach. The modified-PL method is implemented in HYDRUS-1D (version 2.0), which is then used to simulate cyclic capillary fringe dynamics to show the influence of removing artificial pumping errors and to demonstrate the ease of implementation. Artificial pumping errors are shown to be significant for the soils and system characteristics used here in numerical experiments of transport above a fluctuating watertable.
Abstract:
Objectives: In this paper, we present a unified electrodynamic heart model that permits simulations of the body surface potentials generated by the heart in motion. The inclusion of motion in the heart model significantly improves the accuracy of the simulated body surface potentials and therefore also the 12-lead ECG. Methods: The key step is to construct an electromechanical heart model. The cardiac excitation propagation is simulated by an electrical heart model, and the resulting cardiac active forces are used to calculate the ventricular wall motion based on a mechanical model. The source-field point relative position changes during heart systole and diastole. These can be obtained, and then used to calculate body surface ECG based on the electrical heart-torso model. Results: An electromechanical biventricular heart model is constructed and a standard 12-lead ECG is simulated. Compared with a simulated ECG based on the static electrical heart model, the simulated ECG based on the dynamic heart model is in better agreement with a clinically recorded ECG, especially for the ST segment and T wave of a V1-V6 lead ECG. For slight-degree myocardial ischemia ECG simulation, the ST segment and T wave changes can be observed from the simulated ECG based on a dynamic heart model, while the ST segment and T wave of the simulated ECG based on a static heart model are almost unchanged when compared with a normal ECG. Conclusions: This study confirms the importance of the mechanical factor in ECG simulation. The dynamic heart model could provide more accurate ECG simulation, especially for myocardial ischemia or infarction simulation, since the main ECG changes occur at the ST segment and T wave, which correspond to the cardiac systole and diastole phases.
Abstract:
Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to the users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.
Abstract:
This paper presents a forecasting technique for forward electricity/gas prices, one day ahead. This technique combines a Kalman filter (KF) and a generalised autoregressive conditional heteroschedasticity (GARCH) model (often used in financial forecasting). The GARCH model is used to compute the next value of a time series. The KF updates the parameters of the GARCH model when a new observation is available. This technique is applied to real data from the UK energy markets to evaluate its performance. The results show that the forecasting accuracy is improved significantly by using this hybrid model. The methodology can also be applied to forecasting market clearing prices and electricity/gas loads.
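The Kalman-filter parameter update is beyond the scope of an abstract, but the GARCH recursion itself is compact: the next conditional variance is a fixed combination of a constant, the latest squared innovation, and the current variance. A minimal GARCH(1,1) sketch (parameter values are placeholders, and the KF update step is omitted):

```python
def garch11_next_var(omega, alpha, beta, last_ret, last_var):
    # GARCH(1,1) one-step-ahead conditional variance:
    # sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t
    return omega + alpha * last_ret ** 2 + beta * last_var

# placeholder parameters; in the paper's hybrid these would be re-estimated
# by the Kalman filter as each new observation arrives
next_var = garch11_next_var(omega=0.1, alpha=0.05, beta=0.9, last_ret=1.0, last_var=2.0)
```

In the hybrid scheme, omega, alpha and beta would be treated as a state vector and refreshed by the KF at every step rather than held fixed.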
Abstract:
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system that comprises, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find that there is some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak.
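An OLS-estimated VAR of the kind compared here regresses each variable on lagged values of all variables in the system. A minimal VAR(1) fit and one-step forecast on synthetic data (the study's seven-variable systems and Bayesian priors are beyond this sketch):

```python
import numpy as np

def fit_var1(Y):
    # OLS for y_t = c + A @ y_{t-1} + e_t, all equations fitted jointly
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])
    coef, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return coef[0], coef[1:].T  # intercept c, coefficient matrix A

def forecast_var1(c, A, y_last):
    # one-step-ahead point forecast
    return c + A @ y_last

# noise-free synthetic VAR(1), so OLS recovers the true parameters exactly
true_c = np.array([1.0, 2.0])
true_A = np.array([[0.5, 0.0], [0.0, 0.3]])
Y = [np.array([10.0, -5.0])]
for _ in range(7):
    Y.append(true_c + true_A @ Y[-1])
Y = np.array(Y)

c_hat, A_hat = fit_var1(Y)
```

With real FX data the innovations are large relative to the dynamics, which is one way the abstract's conclusion of "very weak" predictive ability can arise.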
Abstract:
Ophthalmophakometric measurements of ocular surface radius of curvature and alignment were evaluated on physical model eyes encompassing a wide range of human ocular dimensions. The results indicated that defocus errors arising from imperfections in the ophthalmophakometer camera telecentricity and light source collimation were smaller than experimental errors. Reasonable estimates emerged for anterior lens surface radius of curvature (accuracy: 0.02–0.10 mm; precision: 0.05–0.09 mm), posterior lens surface radius of curvature (accuracy: 0.10–0.55 mm; precision: 0.06–0.20 mm), eye rotation (accuracy: 0.00–0.32°; precision: 0.06–0.25°), lens tilt (accuracy: 0.00–0.33°; precision: 0.05–0.98°) and lens decentration (accuracy: 0.00–0.07 mm; precision: 0.00–0.07 mm).
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We have previously shown how such distortions in perceived edge blur may be explained by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. To test this model further, we measured the sharpening of drifting, periodic patterns over a large range of contrasts, blur widths, and speeds. The results indicate that, while sharpening increased with speed, it was practically invariant with contrast. This contrast invariance cannot be explained by a fixed compressive nonlinearity since that predicts almost no sharpening at low contrasts. We show by computational modelling of spatiotemporal responses that, if a dynamic contrast gain control precedes the static nonlinear transducer, then motion sharpening, its speed dependence, and its invariance with contrast can be predicted with reasonable accuracy.