950 results for accuracy of estimation


Relevance:

100.00%

Publisher:

Abstract:

Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. The minimisation is often non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem, the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to ameliorate the conditioning by changing, or preconditioning, the Hessian. The literature offers only sparse information describing the causes of ill-conditioning in the optimal state estimation problem or explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and the preconditioned system in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
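
As an illustration of why preconditioning helps, the following toy sketch (our own simplified example with a diagonal Hessian and a single observed component, not the bounds derived in the paper) contrasts the condition number of the Hessian S = B^-1 + H^T R^-1 H with that of the B^(1/2)-preconditioned system:

```python
# Toy illustration (not the paper's derivation): condition number of the
# Hessian of a linearised state-estimation problem, before and after
# first-level preconditioning with B^(1/2). All matrices here are diagonal,
# so the condition number is just max(diag) / min(diag).

def cond_diag(d):
    """Condition number of a diagonal SPD matrix from its diagonal entries."""
    return max(d) / min(d)

b = [1024.0, 1.0]      # diag(B): widely spread background-error variances
obs = [0.0, 1.0]       # diag(H^T R^-1 H): only the 2nd state is observed (sparse obs)

# Unpreconditioned Hessian S = B^-1 + H^T R^-1 H inherits the spread of B^-1.
s = [1.0 / bi + oi for bi, oi in zip(b, obs)]

# Preconditioned Hessian B^(1/2) S B^(1/2) = I + B^(1/2) H^T R^-1 H B^(1/2):
# with sparse observations it stays close to the identity.
s_pre = [1.0 + bi * oi for bi, oi in zip(b, obs)]

print(cond_diag(s), cond_diag(s_pre))
```

With sparse observations the unpreconditioned condition number is driven by the spread of the background variances, while the preconditioned one stays near 1, which is the qualitative effect the paper's bounds describe.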


This paper presents a two-step pseudo likelihood estimation technique for generalized linear mixed models with random effects that are correlated between groups. The core idea is to deal with the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo likelihood algorithms in terms of computational time.
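
The device at the heart of such pseudo likelihood methods can be illustrated in a simplified univariate form (our own sketch, not the paper's multivariate algorithm): an intractable Gaussian integral E[g(U)] is replaced by a second-order Taylor expansion of g around the mean of U.

```python
# Sketch: approximating E[g(U)] for U ~ N(0, s2) by a second-order Taylor
# expansion, E[g(U)] ~ g(0) + 0.5 * s2 * g''(0), versus brute-force
# numerical integration. The logistic mean g below is a hypothetical example.
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def taylor2_mean(g, d2g, s2):
    """Second-order Taylor approximation of E[g(U)], U ~ N(0, s2)."""
    return g(0.0) + 0.5 * s2 * d2g(0.0)

def numeric_mean(g, s2, n=20001, width=10.0):
    """Brute-force Riemann sum of g(u) * N(0, s2) density, for comparison."""
    s = math.sqrt(s2)
    du = 2 * width * s / (n - 1)
    total = 0.0
    for i in range(n):
        u = -width * s + i * du
        total += g(u) * math.exp(-u * u / (2 * s2)) / math.sqrt(2 * math.pi * s2) * du
    return total

g = lambda u: expit(1.0 + u)   # logistic mean with a random intercept u
d2g = lambda u: expit(1.0 + u) * (1 - expit(1.0 + u)) * (1 - 2 * expit(1.0 + u))

s2 = 0.25
print(taylor2_mean(g, d2g, s2), numeric_mean(g, s2))
```

The Taylor approximation avoids any numerical integration, which is where the computational-time advantage of pseudo likelihood methods comes from; the price is an approximation error that grows with the random-effect variance.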


The current study was conducted to investigate the relationships between body size estimations and disordered eating symptomatology. The method of constant stimuli was used to derive three measures of self-perceived body size in 93 women: (1) accuracy of body size estimations (body image distortion); (2) sensitivity in discriminating body size within blocks of trials (body image sensitivity); and (3) variability in making body size estimations between blocks of trials (body image variability). Participants also completed measures of disordered eating. Although body image distortion correlated with dietary restraint and eating concern, body image variability accounted for additional variance in these variables, as well as variance in binge eating. The relationships involving body image variability were found to be mediated by body dissatisfaction and internalization of the thin ideal. Together, these results are consistent with the proposition that body image variability is a significant factor in disordered eating.
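
The three self-perception measures can be sketched as follows, under assumed operational definitions (not taken from the paper): for each block of constant-stimuli trials, a point of subjective equality (PSE) is the stimulus width judged equal to one's own body 50% of the time, and a difference limen (JND) captures discrimination within the block.

```python
# Hedged sketch of the three body-image measures; all data are hypothetical.
import statistics

actual_width = 40.0                        # participant's true body width (hypothetical units)
pse_per_block = [46.0, 44.0, 47.0, 45.0]   # PSE estimated in each block (hypothetical)
jnd_per_block = [2.0, 2.5, 1.8, 2.2]       # difference limen per block (hypothetical)

distortion = statistics.mean(pse_per_block) / actual_width - 1.0   # over-estimation if > 0
sensitivity = statistics.mean(jnd_per_block)                       # smaller = finer discrimination
variability = statistics.stdev(pse_per_block)                      # block-to-block inconsistency

print(distortion, sensitivity, variability)
```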


The problem of dimensional defects in aluminum die-castings is widespread throughout the foundry industry, and detecting them is of paramount importance in maintaining product quality. Owing to the unpredictable factory environment and the highly reflective nature of the metal, it is extremely hard to estimate the true dimensions of these metallic parts autonomously. Some existing vision systems can estimate depth to high accuracy, but they are heavily hardware-dependent, relying on light and laser pattern projectors integrated into vision systems, or on laser scanners. Due to the reflective nature of these metallic parts and variable factory environments, such vision systems tend to perform poorly; moreover, their hardware dependency makes them cumbersome and costly. In this work, we propose a novel, robust 3D reconstruction algorithm capable of reconstructing dimensionally accurate 3D depth models of aluminum die-castings. The developed system is simple and cost-effective, consisting of only a pair of stereo cameras and a diffused fluorescent light. The proposed vision system can estimate surface depths to within 0.5 mm. In addition, the system is invariant to illumination changes as well as to the orientation and location of objects in the input image space, making it highly robust. Owing to its hardware simplicity and robustness, it can be deployed in different factory environments without significant changes to the setup. The proposed system is a major part of a quality inspection system for the automotive manufacturing industry.
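
The geometric core of any such stereo setup is the standard pinhole triangulation relation (the generic textbook formula, not the authors' full reconstruction pipeline): for a rectified pair, depth is focal length times baseline divided by disparity.

```python
# Minimal stereo-triangulation sketch. Rig parameters below are hypothetical.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth Z = f * b / d for a rectified stereo pair (Z in baseline units)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 1000 px focal length, 60 mm baseline, 25 px disparity.
z = depth_from_disparity(1000.0, 60.0, 25.0)   # -> 2400 mm
print(z)
```

The relation also shows why sub-pixel disparity accuracy matters for a 0.5 mm depth tolerance: depth error grows quadratically with depth for a fixed disparity error.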


A tungsten carbide coating on the integrated platform of a transversely heated graphite atomizer was used as a modifier for the direct determination of Se in soil extracts by graphite furnace atomic absorption spectrometry. Diethylenetriaminepentaacetic acid (0.0050 mol L-1) plus ammonium hydrogencarbonate (1.0 mol L-1) extracted predominantly available inorganic selenate from soil. The formation of a large amount of carbonaceous residue inside the atomizer was avoided with a first pyrolysis step at 600 °C assisted by air during 30 s. For 20 µL of soil extract delivered to the atomizer and calibration by matrix matching, an analytical curve (10.0-100 µg L-1) with good linear correlation (r = 0.999) between integrated absorbance and analyte concentration was established. The characteristic mass was ~63 pg of Se, and the lifetime of the tube was ~750 firings. The limit of detection was 1.6 µg L-1, and the relative standard deviations (n = 12) were typically <4% for a soil extract containing 50 µg L-1. The accuracy of the determination of Se was checked for soil samples by means of addition/recovery tests. Recoveries of Se added to four enriched soil samples varied from 80 to 90% and indicated an accurate method.
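
The figures of merit quoted above follow standard definitions; the sketch below computes a detection limit and relative standard deviation from the usual formulas (LOD = 3 × s_blank / slope), using made-up blank readings rather than the paper's raw data.

```python
# Generic analytical figures of merit; all numbers below are hypothetical.
import statistics

slope = 0.0042    # calibration slope: integrated absorbance per (ug/L), hypothetical
blank = [0.0021, 0.0018, 0.0024, 0.0019, 0.0023]  # replicate blank absorbances, hypothetical

# Limit of detection: 3 x standard deviation of the blank / calibration slope.
lod = 3 * statistics.stdev(blank) / slope

# Relative standard deviation of the replicates, in percent.
rsd = 100 * statistics.stdev(blank) / statistics.mean(blank)

print(lod, rsd)
```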


Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)


The aim of the present study was to determine the effects of motor practice on visual judgments of apertures for wheelchair locomotion and the visual control of wheelchair locomotion in wheelchair users who had no prior experience. Sixteen young adults, divided into motor practice and control groups, visually judged varying apertures as passable or impassable under walking, pre-practice, and post-practice conditions. The motor practice group underwent additional motor practice in 10 blocks of five trials each, moving the wheelchair through different apertures. The relative perceptual boundary was determined based on judgment data and kinematic variables that were calculated from videos of the motor practice trials. The participants overestimated the space needed under the walking condition and underestimated it under the wheelchair conditions, independent of group. The accuracy of judgments improved from the pre-practice to post-practice condition in both groups. During motor practice, the participants adaptively modulated wheelchair locomotion, adjusting it to the apertures available. The present findings from a priori visual judgments of space and the continuous judgments that are necessary for wheelchair approach and passage through apertures appear to support the dissociation between processes of perception and action.
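
A relative perceptual boundary of the kind described can be sketched as follows (generic psychophysics with hypothetical data, not the study's values): it is the aperture-to-wheelchair-width ratio at which "passable" judgments cross 50%, found here by linear interpolation between tested apertures.

```python
# Hedged sketch; apertures, judgment proportions and width are hypothetical.

wheelchair_width = 0.65                      # metres, hypothetical
apertures = [0.60, 0.70, 0.80, 0.90, 1.00]   # tested aperture widths, metres
p_passable = [0.0, 0.1, 0.4, 0.9, 1.0]       # proportion judged passable per aperture

def boundary_ratio(apertures, p, width):
    """Aperture/width ratio at the 50% 'passable' crossover (linear interpolation)."""
    for (a0, p0), (a1, p1) in zip(zip(apertures, p), zip(apertures[1:], p[1:])):
        if p0 < 0.5 <= p1:
            a = a0 + (0.5 - p0) * (a1 - a0) / (p1 - p0)
            return a / width
    raise ValueError("no 50% crossing in the data")

print(boundary_ratio(apertures, p_passable, wheelchair_width))
```

A ratio above 1 means participants demand a safety margin beyond the physical width of the wheelchair, which is the kind of over- or under-estimation the judgment conditions compare.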


Background: Compared to human medicine, few equations have been developed in veterinary medicine to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) enrolled in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0), after 10% of weight loss (T1) and after 20% of weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were then evaluated by simple regression analysis, and the predicted means were compared with those determined by DXA to verify the accuracy of the equations. Results: The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p value and r(2), 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM, and one accurately predicted LM. Conclusions: The two-variable equations are preferable because they are effective and offer an alternative method for estimating body composition in the clinical routine. To estimate lean mass, equations using body weight combined with biometric measures can be proposed; to estimate fat mass, equations using body weight combined with bioimpedance analysis can be proposed.
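
A two-variable prediction equation of the kind selected by the stepwise procedure looks like the sketch below; the coefficients are invented for illustration and are not the paper's fitted values.

```python
# Illustrative two-variable linear equation: fat mass from body weight (BW)
# and bioimpedance resistance (R). Coefficients b0, b_bw, b_r are hypothetical.

def predict_fm(bw_kg, resistance_ohm, b0=-1.5, b_bw=0.45, b_r=0.004):
    """FM (kg) ~ b0 + b_bw * BW + b_r * R  -- hypothetical coefficients."""
    return b0 + b_bw * bw_kg + b_r * resistance_ohm

fm = predict_fm(6.0, 150.0)   # a hypothetical 6 kg cat with R = 150 ohm
print(fm)
```

In clinical routine such equations need only a scale and a bioimpedance reading, which is the practical advantage the authors highlight over DXA.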


The aim of this work is to propose a new method for estimating the backward flow directly from the optical flow. We assume that the optical flow has already been computed and we need to estimate the inverse mapping. This mapping is not bijective due to the presence of occlusions and disocclusions, so it is not possible to estimate the inverse function in the whole domain; values in these regions have to be guessed from the available information. We propose an accurate algorithm to calculate the backward flow uniquely from the optical flow, using a simple relation. Occlusions are filled by selecting the maximum motion, and disocclusions are filled with two different strategies: a min-fill strategy, which fills each disoccluded region with the minimum value around the region, and a restricted min-fill approach, which selects the minimum value in a close neighborhood. In the experimental results, we show the accuracy of the method and compare the results of these two strategies.
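
The core relation can be sketched in a simplified 1-D, nearest-neighbour form (the paper works on dense 2-D flow fields, and its min-fill uses the values around each disoccluded region; this sketch just takes the smallest known motion): the backward flow at the forward-warped position x + u(x) is -u(x).

```python
# 1-D sketch of backward-flow estimation from a forward flow u.
# Occlusions (several pixels landing on one target) keep the maximum motion;
# disocclusions (targets no pixel reaches) are filled with a simplified
# min-fill: the smallest-magnitude known backward value.

def backward_flow_1d(u):
    n = len(u)
    bw = [None] * n
    for x, ux in enumerate(u):
        t = round(x + ux)                 # forward-warped (target) position
        if 0 <= t < n:
            # occlusion handling: keep the largest motion mapping to t
            if bw[t] is None or abs(ux) > abs(bw[t]):
                bw[t] = -ux
    known = [v for v in bw if v is not None]
    for t in range(n):                    # simplified min-fill of disocclusions
        if bw[t] is None:
            bw[t] = min(known, key=abs) if known else 0
    return bw

print(backward_flow_1d([1, 1, 0, 0]))
```

Position 0 is disoccluded (nothing maps back to it) and gets the fill value, while position 2 is occluded (two sources) and keeps the larger motion, mirroring the two cases the abstract distinguishes.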


In 1998-2001 Finland suffered its most severe recorded insect outbreak, covering over 500,000 hectares. The outbreak was caused by the common pine sawfly (Diprion pini L.) and has continued in the study area, Palokangas, ever since. To find a good method for monitoring this type of outbreak, this study examined the efficacy of multi-temporal ERS-2 and ENVISAT SAR imagery for estimating Scots pine (Pinus sylvestris L.) defoliation. Three methods were tested: unsupervised k-means clustering, supervised linear discriminant analysis (LDA) and logistic regression. In addition, I assessed whether harvested areas could be differentiated from defoliated forest using the same methods. Two different speckle filters were used to determine the effect of filtering on the SAR imagery and the subsequent results. Logistic regression performed best, producing a classification accuracy of 81.6% (kappa 0.62) with two classes (no defoliation, >20% defoliation). With two classes, LDA accuracy was at best 77.7% (kappa 0.54) and k-means 72.8% (kappa 0.46). In general, the largest speckle filter, a 5 x 5 image window, performed best. When additional classes were added, the accuracy usually degraded step by step. The results were good, but because of the restrictions of the study they should be confirmed with independent data before firm conclusions can be drawn about their reliability. The restrictions include the small field data set and the resulting problems with accuracy assessment (no separate testing data), as well as the lack of meteorological data for the imaging dates.
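
The kappa values quoted alongside the accuracies are Cohen's kappa, chance-corrected agreement computed from a confusion matrix; the sketch below uses made-up counts, not the study's data.

```python
# Cohen's kappa from a 2-class confusion matrix (hypothetical counts).

def cohens_kappa(cm):
    """cm[i][j] = number of samples with reference class i predicted as j."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n                      # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(k)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

cm = [[40, 10],   # rows: reference class, columns: predicted class
      [ 8, 42]]
print(cohens_kappa(cm))
```

Because kappa discounts agreement expected by chance, a classifier can have high raw accuracy but modest kappa when class proportions are skewed, which is why the study reports both.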


Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. 
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent used for model development, provided that differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
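
For context, the classical discordancy measure of Hosking and Wallis is the standard statistical device for flagging sites that do not fit a proposed region (the dissertation proposes its own, physically based metric; this sketch shows only the classical one, in two attribute dimensions with pure-stdlib linear algebra).

```python
# Hosking-Wallis discordancy D_i = (N/3) (u_i - u_bar)^T A^-1 (u_i - u_bar),
# with A = sum_i (u_i - u_bar)(u_i - u_bar)^T, for 2-D site attribute vectors.
# Large D_i flags a site as discordant with the rest of the region.

def discordancy(us):
    n = len(us)
    m = [sum(u[k] for u in us) / n for k in (0, 1)]           # attribute means
    d = [(u[0] - m[0], u[1] - m[1]) for u in us]              # deviations
    a = [[sum(x[i] * x[j] for x in d) for j in (0, 1)] for i in (0, 1)]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]               # 2x2 inverse
    ainv = [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]
    return [n / 3.0 * (x[0] * (ainv[0][0] * x[0] + ainv[0][1] * x[1])
                       + x[1] * (ainv[1][0] * x[0] + ainv[1][1] * x[1])) for x in d]

# Hypothetical sites: the last one is a clear outlier in attribute space.
sites = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (10.0, 10.0)]
print(discordancy(sites))
```

By construction the D_i values average p/3 per attribute dimension (so their sum is 2N/3 here), and the outlying site receives the largest value.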


The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect if the drill is advancing in the correct direction and that injury of the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens where an average tool localization accuracy of 0.29 mm was observed.
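
The correlation step described above can be sketched generically (normalized cross-correlation over a handful of hypothetical candidate poses; the paper's actual estimator and data differ): the pose whose image-derived density profile best matches the measured axial drilling force is taken as the tool-pose estimate.

```python
# Hedged sketch: score candidate drill poses by the normalized cross-correlation
# between the observed axial force profile and the CT-derived bone-density
# profile along each candidate path. All profiles below are hypothetical.
import math

def ncc(a, b):
    """Normalized cross-correlation (Pearson correlation) of two profiles."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

force = [1.0, 1.2, 2.5, 2.4, 1.1]          # measured axial force along drill depth
candidates = {                              # density profiles along candidate paths
    "pose_a": [0.9, 1.0, 1.1, 1.2, 1.3],
    "pose_b": [1.0, 1.1, 2.6, 2.5, 1.0],
}
best = max(candidates, key=lambda k: ncc(force, candidates[k]))
print(best)
```

The dense band in the middle of the measured force matches only the second profile, so that pose wins; the heterogeneity of mastoid bone density is exactly what makes this matching informative.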


The scaphoid is the most frequently fractured carpal bone. When investigating fixation stability, which may influence healing, knowledge of forces and moments acting on the scaphoid is essential. The aim of this study was to evaluate cartilage contact forces acting on the intact scaphoid in various functional wrist positions using finite element modeling. A novel methodology was utilized as an attempt to overcome some limitations of earlier studies, namely, relatively coarse imaging resolution to assess geometry, assumption of idealized cartilage thicknesses and neglected cartilage pre-stresses in the unloaded joint. Carpal bone positions and articular cartilage geometry were obtained independently by means of high resolution CT imaging and incorporated into finite element (FE) models of the human wrist in eight functional positions. Displacement driven FE analyses were used to resolve inter-penetration of cartilage layers, and provided contact areas, forces and pressure distribution for the scaphoid bone. The results were in the range reported by previous studies. Novel findings of this study were: (i) cartilage thickness was found to be heterogeneous for each bone and vary considerably between carpal bones; (ii) this heterogeneity largely influenced the FE results and (iii) the forces acting on the scaphoid in the unloaded wrist were found to be significant. As major limitations, accuracy of the method was found to be relatively low, and the results could not be compared to independent experiments. The obtained results will be used in a following study to evaluate existing and recently developed screws used to fix scaphoid fractures.


Introduction: The aim of this study was to determine which single measurement on post-mortem cardiac MR reflects actual heart weight as measured at autopsy, assess the intra- and inter-observer reliability of MR measurements, derive a formula to predict heart weight from MR measurements and test the accuracy of the formula to prospectively predict heart weight. Materials and methods: 53 human cadavers underwent post-mortem cardiac MR and forensic autopsy. In Phase 1, left ventricular area and wall thickness were measured on short axis and four chamber view images of 29 cases. All measurements were correlated to heart weight at autopsy using linear regression analysis. In Phase 2, single left ventricular area measurements on four chamber view images (LVA_4C) from 24 cases were used to predict heart weight at autopsy based on equations derived during Phase 1. The intra-class correlation coefficient (ICC) was used to determine inter- and intra-reader agreement. Results: Heart weight correlates strongly with LVA_4C (r=0.78; p<0.001). Intra-reader and inter-reader reliability were excellent for LVA_4C (ICC=0.81–0.91; p<0.001 and ICC=0.90; p<0.001, respectively). A simplified formula for heart weight ([g]≈LVA_4C [mm2]×0.11) was derived based on linear regression analysis. Conclusions: This study shows that single circumferential area measurements of the left ventricle in the four chamber view on post-mortem cardiac MR reflect actual heart weight as measured at autopsy. These measurements yield excellent intra- and inter-reader reliability and can be used to predict heart weight prior to autopsy or to give a reasonable estimate of heart weight in cases where autopsy is not performed.
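
The simplified formula from the abstract, heart weight [g] ≈ LVA_4C [mm2] × 0.11, is a one-line computation; the 0.11 coefficient is the paper's, while the example area below is made up.

```python
# Heart-weight prediction from a single four-chamber-view LV area measurement,
# using the paper's simplified regression formula. Example input is hypothetical.

def heart_weight_g(lva_4c_mm2):
    """Predicted heart weight in grams from LVA_4C in mm^2 (weight ~ 0.11 * area)."""
    return 0.11 * lva_4c_mm2

w = heart_weight_g(3200.0)   # a hypothetical 3200 mm^2 area
print(w)
```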


Background: Diabetes mellitus is spreading throughout the world, and diabetic individuals have been shown to often assess their food intake inaccurately; it is therefore a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with advances in computer vision, has permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective: The main objective of this study was to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it with their own performance in carbohydrate counting. In addition, the user experience and usability of the system were evaluated by questionnaires. Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland) and involved 19 adult volunteers with type 1 diabetes, each participating once. On each study day, a total of six meals of broad diversity were taken from the hospital's restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user's experience with GoCARB.
Results: The mean absolute error of the participants' estimates was 27.89 (SD 38.20) grams of carbohydrate, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals, the GoCARB automatic segmentation was successful, and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use. Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants thought the app was useful and easy to use. GoCARB appears to be a well-accepted, supportive mHealth tool for the assessment of served-on-a-plate meals.
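
The final GoCARB step, combining reconstructed volumes with nutritional data, amounts to a simple sum; the densities and carbohydrate fractions below are illustrative stand-ins, not USDA values.

```python
# Sketch of carbohydrate computation from reconstructed food volumes:
# carbs per item = volume x food density x carbohydrate fraction by weight.
# All food data below are hypothetical placeholders, not USDA entries.

foods = [
    # (name, volume_ml, density_g_per_ml, carb_fraction)
    ("rice",    180.0, 0.80, 0.28),
    ("carrots", 120.0, 0.64, 0.08),
]

def meal_carbs_g(foods):
    """Total grams of carbohydrate in the meal."""
    return sum(vol * dens * frac for _, vol, dens, frac in foods)

print(meal_carbs_g(foods))
```

Errors in the 3D volume estimate propagate linearly into the carbohydrate estimate, which is why segmentation and shape reconstruction quality dominate the system's overall accuracy.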