80 results for Error correction model


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: We aimed to determine the prevalence and associations of refractive error on Norfolk Island. DESIGN: Population-based study on Norfolk Island, South Pacific. PARTICIPANTS: All permanent residents of Norfolk Island aged ≥ 15 years were invited to participate. METHODS: Participants underwent non-cycloplegic autorefraction, slit-lamp biomicroscope examination and biometry assessment. Only phakic eyes were analysed. MAIN OUTCOME MEASURES: Prevalence and multivariate associations of refractive error and myopia. RESULTS: A total of 677 people (645 right phakic eyes, 648 left phakic eyes) aged ≥ 15 years were included in this study. Mean age of participants was 51.1 years (standard deviation 15.7; range 15-81). Three hundred and seventy-six people (55.5%) were female. Adjusted to the 2006 Norfolk Island population, prevalence estimates of refractive error were as follows: myopia (mean spherical equivalent ≤ -1.0 D) 10.1%, hypermetropia (mean spherical equivalent ≥ 1.0 D) 36.6%, and astigmatism 17.7%. Significant independent predictors of myopia in the multivariate model were lower age (P < 0.001), longer axial length (P < 0.001), shallower anterior chamber depth (P = 0.031) and increased corneal curvature (P < 0.001). Significant independent predictors of refractive error were increasing age (P < 0.001), male gender (P = 0.009), Pitcairn ancestry (P = 0.041), cataract (P < 0.001), longer axial length (P < 0.001) and decreased corneal curvature (P < 0.001). CONCLUSIONS: The prevalence of myopia on Norfolk Island is lower than on mainland Australia, and the Norfolk Island population demonstrates ethnic differences in the prevalence estimates. Given the significant associations between refractive error and several ocular biometry characteristics, Norfolk Island may be a useful population in which to find the genetic basis of refractive error.

Relevance:

30.00%

Publisher:

Abstract:

Most studies examining the temperature–mortality association in a city used temperatures from one site or the average from a network of sites. This may cause measurement error as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data in 163 areas across Brisbane city, Australia from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality. Also, we used a time series model to examine non-spatial temperatures using a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were equally good at estimating the city-wide association between temperature and mortality as spatiotemporal models.
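The interpolation step described above can be illustrated with a minimal ordinary-kriging sketch. Everything here is illustrative: the exponential variogram, its parameters, and the synthetic "temperature" field are assumptions, not the Brisbane data or the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ordinary_kriging(xy, z, xy_new, sill=1.0, vrange=2.0, nugget=0.01):
    """Minimal ordinary kriging with an assumed exponential variogram.
    Real analyses fit the variogram to the data; this is a sketch."""
    def gamma(h):
        return nugget + sill * (1 - np.exp(-h / vrange))
    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    # Ordinary kriging system: variogram block plus the unbiasedness row
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - xy_new, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]       # weights sum to 1
    return w @ z

# 19 hypothetical monitoring sites over a 10 km x 10 km city, with a
# smooth west-to-east temperature gradient plus small measurement noise
sites = rng.uniform(0, 10, (19, 2))
temps = 25 + 0.5 * sites[:, 0] + rng.normal(0, 0.1, 19)
est = ordinary_kriging(sites, temps, np.array([5.0, 5.0]))
print(round(est, 2))   # close to the true field value 27.5 at the centre
```

Because the weights are constrained to sum to one, the estimate is an unbiased weighted average that favours nearby sites, which is what lets kriged temperatures track the observed ones.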

Relevance:

30.00%

Publisher:

Abstract:

Purpose: In animal models, hemi-field deprivation results in localised, graded vitreous chamber elongation and presumably deprivation-induced localised changes in retinal processing. The aim of this research was to determine whether there are variations in ERG responses across the retina in normal chick eyes, and to examine the effect of hemi-field and full-field deprivation on ERG responses across the retina and at earlier times than have previously been examined electrophysiologically. Methods: Chicks were either untreated or wore monocular full-diffusers or half-diffusers (depriving nasal retina) (n = 6-8 per group) from day 8. mfERG responses were measured using the VERIS mfERG system across the central 18.2° × 16.7° (H × V) field. The stimulus consisted of 61 unscaled hexagons, each modulated between black and white according to a pseudorandom binary m-sequence. The mfERG was measured on day 12 in untreated chicks, following 4 days of hemi-field diffuser wear, and 2, 48 and 96 h after application of full-field diffusers. Results: The ERG response of untreated chick eyes did not vary across the measured field; there was no effect of retinal location on the N1-P1 amplitude (p = 0.108) or on P1 implicit time (p > 0.05). This finding is consistent with retinal ganglion cell density of the chick varying by only a factor of two across the entire retina. Half-diffusers produced a ramped retina and a graded effect of negative lens correction (p < 0.0001); changes in retinal processing were localised. The untreated retina showed increasing complexity of the ERG waveform with development; form-deprivation prevented the increasing complexity of the response at the 2, 48 and 96 h measurement times and produced alterations in response timing. Conclusions: Form-deprivation, with its concomitant loss of image contrast and high spatial frequency content, prevented development of the ERG responses, consistent with a disruption of the development of retinal feedback systems.
The characterisation of ERG responses in normal and deprived chick eyes across the retina allows the assessment of concurrent visual and retinal manipulations in this model. (Ophthalmic & Physiological Optics © 2013 The College of Optometrists.)

Relevance:

30.00%

Publisher:

Abstract:

iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low-signal data have higher relative variability, yet low-abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from spiked proteins at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis. Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation, including fields outside of proteomics; consequently, the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS in quantitation across diverse areas of biology and chemistry.
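The effect of a variance-stabilizing transform on additive-multiplicative noise can be sketched numerically. The parameters below (noise levels, intensity range, the glog offset `c`) are illustrative assumptions, not values fitted to any iTRAQ data set; the generalised-log transform is the standard arsinh-type form used by VSN-style methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Additive-multiplicative error model for peak intensities:
#   observed = mu * exp(eta) + eps   (eta multiplicative, eps additive)
mu = np.repeat(np.logspace(1, 4, 8), 500)     # true intensities, low to high
y = mu * np.exp(rng.normal(0.0, 0.1, mu.size)) + rng.normal(0.0, 5.0, mu.size)

def glog(x, c):
    """Generalised-log (arsinh-type) variance-stabilizing transform."""
    return np.log(x + np.sqrt(x**2 + c**2))

# Within-group standard deviations before and after the transform
raw_sd = y.reshape(8, 500).std(axis=1)
vsn_sd = glog(y, c=50.0).reshape(8, 500).std(axis=1)

# Raw SDs grow strongly with intensity; transformed SDs are roughly uniform
print(raw_sd.max() / raw_sd.min(), vsn_sd.max() / vsn_sd.min())
```

At high intensity the transform behaves like a log (taming multiplicative noise); at low intensity it flattens out (taming additive noise), which is why one transform stabilizes the whole range.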

Relevance:

30.00%

Publisher:

Abstract:

Travel time estimation and prediction on motorways has long been a topic of research. Prediction modelling generally assumes that the estimation is perfect; however, no matter how good the prediction modelling is, errors in estimation can significantly deteriorate the accuracy and reliability of the prediction. Models have been proposed to estimate travel time from loop detector data. Generally, detectors are closely spaced (say 500 m) and travel time can be estimated accurately. However, detectors are not always perfect, and even during normal running conditions a few detectors malfunction, increasing the spacing between the functional detectors. Under such conditions, the error in travel time estimation is significantly large and generally unacceptable. This research evaluates the in-practice travel time estimation model under different traffic conditions. It is observed that the existing models fail to accurately estimate travel time with large detector spacing and during congestion shoulder periods. To address this issue, an innovative Hybrid model that considers only loop data for travel time estimation is proposed. The model is tested using simulation and validated with real Bluetooth data from the Pacific Motorway, Brisbane. Results indicate that during non-free-flow conditions and with larger detector spacing, the Hybrid model provides a significant improvement in the accuracy of travel time estimation.
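The sensitivity to detector spacing can be seen in a minimal sketch of a common baseline estimator: the mid-point method, where each detector's spot speed is assumed to hold halfway to its neighbours. This is an assumed stand-in for "the in-practice model", not the paper's exact formulation or its Hybrid model.

```python
def travel_time_seconds(positions_km, speeds_kmh):
    """Mid-point travel time estimate from loop detector spot speeds:
    each detector's speed is assumed to apply over half the gap to each
    neighbouring detector."""
    total_h = 0.0
    for i in range(len(positions_km) - 1):
        half_gap = (positions_km[i + 1] - positions_km[i]) / 2.0
        total_h += half_gap / speeds_kmh[i] + half_gap / speeds_kmh[i + 1]
    return total_h * 3600.0

# 2 km link, detectors every 500 m, free flow at 100 km/h -> 72 s
print(travel_time_seconds([0.0, 0.5, 1.0, 1.5, 2.0], [100] * 5))

# Same link with detectors 1 km apart and congestion at the mid-link
# detector: the coarser spacing spreads the slow speed over a full 1 km
print(travel_time_seconds([0.0, 1.0, 2.0], [100, 25, 100]))   # -> 180 s
```

With dense spacing a localised queue only distorts a short segment; with wide spacing (as after detector failures) the same spot speed is extrapolated over a long stretch, which is exactly the error regime the abstract describes.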

Relevance:

30.00%

Publisher:

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
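The sensor-to-navigation-frame transformation at the heart of this kind of mapping chain can be sketched with homogeneous transforms. The mounting offsets and poses below are invented for illustration; the paper's error model and calibration procedure are not reproduced here, only the basic frame composition they analyse.

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix for a yaw rotation about the z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical extrinsic calibration: sensor mounted 1.2 m forward and
# 0.5 m up, rotated 90 degrees about z relative to the vehicle body
T_body_sensor = homogeneous(rot_z(np.pi / 2), np.array([1.2, 0.0, 0.5]))
# Vehicle pose in the navigation frame (from localisation)
T_nav_body = homogeneous(rot_z(0.0), np.array([10.0, 5.0, 0.0]))

# Map a range return from the sensor frame into the navigation frame
p_sensor = np.array([2.0, 0.0, 0.0, 1.0])          # homogeneous point
p_nav = T_nav_body @ T_body_sensor @ p_sensor
print(p_nav[:3])                                   # mapped point, nav frame
```

Errors in either transform (mis-calibrated extrinsics, pose timestamp skew) propagate directly into the mapped point, which is why the paper's run-time error model composes uncertainty along exactly this chain.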

Relevance:

30.00%

Publisher:

Abstract:

An Artificial Neural Network (ANN) is a computational modeling tool which has found extensive acceptance in many disciplines for modeling complex real-world problems. An ANN can model problems through learning by example, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN were evaluated in predicting the kinematic viscosity of biodiesels over the wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and chemical composition of the biodiesel were used as input variables. To obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory-standard testing equipment following internationally recognized testing procedures. The Neural Network Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised by trial and error to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R2), root mean squared error (RMS) and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel, and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. Therefore, the model developed in this study can be a useful tool for accurately predicting biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.
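The same temperature-plus-composition-in, viscosity-out setup can be sketched with a small feed-forward network. The data below are synthetic (a rough exponential temperature dependence, not the study's measurements), the network size is an arbitrary choice, and scikit-learn stands in for the MATLAB toolbox the authors used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in data: viscosity falls roughly exponentially with
# temperature and varies with a hypothetical composition index
n = 400
temp = rng.uniform(10, 90, n)            # deg C
comp = rng.uniform(0.0, 1.0, n)          # composition index (invented)
visc = (2.0 + 3.0 * comp) * np.exp(-0.03 * temp) + rng.normal(0, 0.02, n)

# Scale inputs, then fit a small two-hidden-layer network
X = StandardScaler().fit_transform(np.column_stack([temp, comp]))
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0).fit(X, visc)
r2 = model.score(X, visc)                # coefficient of determination
print(round(r2, 3))
```

The network learns the nonlinear temperature-viscosity relationship without being told its functional form, which is the "learning by example" property the abstract highlights; in practice one would also hold out a validation set, as the study did.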

Relevance:

30.00%

Publisher:

Abstract:

Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It is concluded that the errors decrease after size reduction, and coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.

Relevance:

30.00%

Publisher:

Abstract:

The parabolic trough concentrator is the most mature, proven and widespread technology for large-scale exploitation of solar energy for middle-temperature applications. Assessment of the opportunities and possibilities of the collector system relies on its optical performance. A reliable Monte Carlo ray tracing model of a parabolic trough collector is developed using Zemax software. The optical performance of an ideal collector depends on the solar spectral distribution and the sunshape, and on the spectral selectivity of the associated components. Therefore, each step of the model, including the spectral distribution of the solar energy, trough reflectance, glazing anti-reflection coating and the absorber selective coating, is explained and verified. The radiation flux distribution around the receiver and the optical efficiency, two basic outputs of the optical simulation, are calculated using the model and verified against a widely accepted analytical profile and measured values, respectively. Very good agreement is obtained. Further investigations are carried out to analyse the characteristics of the radiation distribution around the receiver tube at different insolation and envelope conditions and with different selective coatings on the receiver, as well as the impact of light scattered from the receiver surface on the efficiency. In addition, the model has the capability to analyse the optical performance at variable sunshape, tracking error, collector imperfections (including absorber misalignment with the focal line and de-focusing of the absorber), different rim angles, and geometric concentrations. The current optical model can play a significant role in understanding the optical aspects of a trough collector, and can be employed to extract useful information on the optical performance.
In the long run, this optical model will pave the way for the construction of low-cost standalone photovoltaic and thermal hybrid collectors in Australia for small-scale domestic hot water and electricity production.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. We develop a sheep thoracic spine interbody fusion model to study the suitability of a polycaprolactone (PCL)-based scaffold and recombinant human bone morphogenetic protein-2 (rhBMP-2) as a bone graft substitute within the thoracic spine. The surgical approach is a mini-open thoracotomy with relevance to minimally invasive deformity correction surgery for adolescent idiopathic scoliosis. To date there are no studies examining the use of this biodegradable implant in combination with biologics in a sheep thoracic spine model. METHODS. In the present study, six sheep underwent a 3-level (T6/7, T8/9 and T10/11) discectomy with randomly allocated implantation of a different graft substitute at each of the three levels: (i) calcium phosphate (CaP) coated PCL-based scaffold plus 0.54 μg rhBMP-2, (ii) CaP-coated PCL-based scaffold alone, or (iii) autograft (mulched rib head). Fusion was assessed at six months post-surgery. RESULTS. Computed tomographic scanning demonstrated higher fusion grades in the rhBMP-2 plus PCL-based scaffold group than with either the PCL-based scaffold alone or autograft. These results were supported by histological evaluations of the respective groups. Biomechanical testing revealed significantly higher stiffness for the rhBMP-2 plus PCL-based scaffold group in all loading directions in comparison to the other two groups. CONCLUSION. The results of this study demonstrate that the rhBMP-2 plus PCL-based scaffold is a viable bone graft substitute, providing an optimal environment for thoracic interbody spinal fusion in a large animal model.

Relevance:

30.00%

Publisher:

Abstract:

We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired, robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to the HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal directed path planning results of HiLAM in two different environments, an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
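The dead-reckoning drift and loop-closure rectification described above can be reduced to a toy sketch. The square trajectory, noise level, and the linear redistribution of the closure error are all simplifying assumptions; RatSLAM's actual pose-cell mechanics are far richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# A square loop: unit-speed heading vectors for four 50-step legs
true_steps = np.concatenate([
    np.tile(v, (50, 1)) for v in ([1, 0], [0, 1], [-1, 0], [0, -1])])

# Visual-odometry-style noise on each step: dead reckoning loses spatial
# fidelity over time as the per-step errors accumulate
noisy_steps = true_steps + rng.normal(0, 0.05, true_steps.shape)
path = np.cumsum(noisy_steps, axis=0)
drift = np.linalg.norm(path[-1])       # the loop should end at the origin

# Loop closure, reduced to its simplest form: once the start location is
# re-recognised, spread the accumulated end-point error back along the path
correction = np.outer(np.linspace(0, 1, len(path)), path[-1])
closed = path - correction
print(drift, np.linalg.norm(closed[-1]))
```

The corrected path ends exactly where it started, while interior points are only gently adjusted, which is the sense in which loop-closure detection "rectifies the loss of fidelity" in the raw odometry before it is fed to the grid-cell and place-cell maps.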

Relevance:

30.00%

Publisher:

Abstract:

Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error=1.4 ppb), while the best monthly model explained 76% (absolute RMS error=1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006–2011. We are making our model predictions freely available for research.
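The structure of such a model, ground-level NO2 regressed on a satellite column plus land-use predictors, can be sketched with ordinary least squares as a simple stand-in for the GEE the authors fitted (GEE additionally handles within-site correlation). The data below are synthetic, not the Australian monitoring network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the LUR setup: ground NO2 driven by a satellite
# tropospheric column plus a hypothetical road-density predictor
n = 200
column = rng.uniform(1, 8, n)          # tropospheric NO2 column (arb. units)
road = rng.uniform(0, 1, n)            # road-density predictor (invented)
ground = 0.8 * column + 3.0 * road + rng.normal(0, 0.5, n)   # monitor NO2, ppb

# Least-squares LUR fit: ground NO2 ~ intercept + column + road density
X = np.column_stack([np.ones(n), column, road])
beta, *_ = np.linalg.lstsq(X, ground, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((ground - pred) ** 2) / np.sum((ground - ground.mean()) ** 2)
print(np.round(beta, 2), round(r2, 2))
```

Once the coefficients are estimated against the monitor data, the same linear combination can be evaluated wherever the predictors exist, which is what lets the study predict NO2 at every census mesh block rather than only at monitoring sites.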

Relevance:

30.00%

Publisher:

Abstract:

Bayesian networks (BNs) are graphical probabilistic models used for reasoning under uncertainty. These models are becoming increasingly popular in a range of fields including ecology, computational biology, medical diagnosis, and forensics. In most of these cases, the BNs are quantified using information from experts or from user opinions. An interest therefore lies in the way in which multiple opinions can be represented and used in a BN. This paper proposes the use of a measurement error model to combine opinions for use in the quantification of a BN. The multiple opinions are treated as a realisation of measurement error, and the model uses the posterior probabilities ascribed to each node in the BN, which are computed from the prior information given by each expert. The proposed model addresses the issues associated with current methods of combining opinions, such as the absence of a coherent probability model, the failure to maintain the conditional independence structure of the BN, and the provision of only a point estimate for the consensus. The proposed model was applied to an existing Bayesian network and performed well when compared with existing methods of combining opinions.
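The flavour of a measurement error treatment of opinions can be sketched with a deliberately simple variant (not the paper's model): each expert's stated probability is treated as a noisy measurement, on the logit scale, of an unknown true probability, with an assumed noise level and a flat prior.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical expert opinions for P(node = true) at one BN node
opinions = np.array([0.60, 0.70, 0.55, 0.80])
tau = 0.5      # assumed measurement-error SD on the logit scale

# Measurement error model: logit(p_i) ~ Normal(theta, tau^2), flat prior
# on theta  =>  theta | data ~ Normal(mean of logits, tau^2 / n)
z = logit(opinions)
post_mean, post_sd = z.mean(), tau / np.sqrt(len(z))

# A full consensus distribution, not just a point estimate
consensus = inv_logit(post_mean)
lo, hi = inv_logit(post_mean - 1.96 * post_sd), inv_logit(post_mean + 1.96 * post_sd)
print(round(consensus, 3), round(lo, 3), round(hi, 3))
```

The key contrast with simple averaging is the interval: the posterior quantifies how much the experts disagree-adjusted estimate could still move, addressing the "only a point estimate for the consensus" criticism the abstract raises.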

Relevance:

30.00%

Publisher:

Abstract:

Aerosol deposition in cylindrical tubes is a subject of interest to researchers and engineers in many applications of aerosol physics and metrology. Investigation of nano-particles in contexts such as the lungs, upper airways, batteries and vehicle exhaust gases is vital because of their smaller size, adverse health effects and the greater difficulty of trapping them compared with micro-particles. Lagrangian particle tracking provides an effective method for simulating the deposition of nano-particles as well as micro-particles, as it accounts for the particle inertia effect as well as Brownian excitation. However, use of the Lagrangian approach for simulating ultrafine particles has been limited by computational cost and numerical difficulties. In this paper, the deposition of nano-particles in cylindrical tubes under laminar conditions is studied using the Lagrangian particle tracking method. The commercial Fluent software is used to simulate the fluid flow in the pipes and to study the deposition and dispersion of nano-particles. Different particle diameters as well as different flow rates are examined. A point analysis in a uniform flow is performed to validate the Brownian motion. The results show good agreement between the calculated deposition efficiency and the analytic correlations in the literature. Furthermore, for nano-particles with diameters greater than 40 nm, the deposition efficiency calculated by the Lagrangian method is less than the analytic correlations based on the Eulerian method, due to statistical error or the inertia effect.
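A stripped-down Lagrangian tracking sketch shows why Brownian deposition grows as particles shrink. The tube radius, residence time and particle counts are invented for illustration; inertia, the laminar velocity profile, and the Cunningham slip correction (which matters below ~100 nm in reality) are all omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

kB, T = 1.380649e-23, 293.0       # Boltzmann constant (J/K), temperature (K)
mu = 1.81e-5                      # air dynamic viscosity (Pa s)

def diffusion_coeff(d_p):
    """Stokes-Einstein diffusion coefficient (slip correction omitted)."""
    return kB * T / (3 * np.pi * mu * d_p)

def deposited_fraction(d_p, radius=1e-4, residence=1.0, n=2000, steps=500):
    """Track particles in a tube cross-section with Brownian steps;
    a particle is counted as deposited when its walk reaches the wall."""
    D = diffusion_coeff(d_p)
    dt = residence / steps
    pos = np.zeros((n, 2))                       # start on the tube axis
    alive = np.ones(n, dtype=bool)
    for _ in range(steps):
        pos[alive] += rng.normal(0, np.sqrt(2 * D * dt), (alive.sum(), 2))
        alive &= np.hypot(pos[:, 0], pos[:, 1]) < radius
    return 1 - alive.mean()

f_10nm = deposited_fraction(10e-9)
f_100nm = deposited_fraction(100e-9)
print(f_10nm, f_100nm)   # smaller particles diffuse faster, so deposit more
```

This also illustrates the statistical error the abstract mentions: the deposited fraction is a Monte Carlo estimate whose noise shrinks only as the square root of the number of tracked particles, which is part of why Lagrangian tracking of ultrafine particles is computationally expensive.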