939 results for robust estimation statistics
Abstract:
Traditional methods of submerged aquatic vegetation (SAV) survey are time-consuming and costly. Optical remote sensing is an alternative, but it has limitations in the aquatic environment, whereas echosounder techniques detect submerged targets efficiently. The aim of this study is therefore to evaluate different interpolation approaches applied to SAV sample data collected by echosounder. The case study was performed in a region of the Uberaba River, Brazil. The interpolation methods evaluated in this work are: Nearest Neighbor, Weighted Average, Triangular Irregular Network (TIN), and ordinary kriging. The best results were obtained with kriging interpolation. Thus, the use of geostatistics is recommended for spatial inference of SAV from sample data surveyed with echosounder techniques. © 2012 IEEE.
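For readers who want to reproduce the winning interpolation step, the sketch below krigs scattered samples onto a regular grid with the pykrige package; the coordinates, values, and spherical variogram model are hypothetical placeholders, not the study's data.

```python
# A minimal ordinary-kriging sketch (hypothetical sample data, not the study's).
import numpy as np
from pykrige.ok import OrdinaryKriging

# Scattered echosounder samples: x/y positions and an SAV metric (e.g., canopy height).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 100.0, 50)
y = rng.uniform(0.0, 100.0, 50)
z = np.sin(x / 20.0) + 0.1 * rng.standard_normal(50)

# Fit an ordinary-kriging model with a spherical variogram (an assumed choice).
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")

# Interpolate onto a regular grid; `ss` is the kriging variance per cell.
grid_x = np.linspace(0.0, 100.0, 101)
grid_y = np.linspace(0.0, 100.0, 101)
z_grid, ss = ok.execute("grid", grid_x, grid_y)
```

The per-cell kriging variance `ss` is one practical reason to prefer geostatistics here: unlike nearest neighbor or TIN, it quantifies the interpolation uncertainty.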
Abstract:
Includes bibliography
Abstract:
Parametric VaR (Value-at-Risk) is widely used due to its simplicity and ease of calculation. However, the normality assumption often used in estimating parametric VaR does not provide satisfactory estimates of risk exposure. Therefore, this study suggests a method for computing parametric VaR based on goodness-of-fit tests using the empirical distribution function (EDF) for extreme returns, and compares the feasibility of this method for the banking sector in an emerging market and in a developed one. The paper also discusses possible theoretical contributions to related fields such as enterprise risk management (ERM). © 2013 Elsevier Ltd.
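A hedged sketch of the general idea (not the paper's exact procedure): fit candidate distributions to returns, use an EDF-based goodness-of-fit test (Kolmogorov-Smirnov here) to pick the better fit, and read VaR off the chosen quantile. The data and the candidate set are illustrative.

```python
# Illustrative parametric VaR with an EDF-based fit check (not the paper's exact method).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=rng)  # fat-tailed toy returns

candidates = {"normal": stats.norm, "student-t": stats.t}
alpha = 0.01  # 99% VaR

best_name, best_stat, best_frozen = None, np.inf, None
for name, dist in candidates.items():
    params = dist.fit(returns)
    ks_stat, _ = stats.kstest(returns, dist.cdf, args=params)  # EDF-based GoF statistic
    if ks_stat < best_stat:
        best_name, best_stat, best_frozen = name, ks_stat, dist(*params)

var_99 = -best_frozen.ppf(alpha)  # loss quantile, reported as a positive number
print(f"best fit: {best_name} (KS={best_stat:.4f}), 99% VaR = {var_99:.4%}")
```

On fat-tailed data like this, the Student-t fit typically wins the EDF test and yields a larger (more conservative) VaR than the normal assumption, which is the abstract's core point.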
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since they presented precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
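The abstract does not detail the descriptor itself, but its general flavor, summarizing a shape's Hough-space accumulator with simple statistics used as a feature vector, can be loosely illustrated with scikit-image as below; this is a rough sketch, not the published HTS algorithm.

```python
# Loose illustration of "statistics over Hough space" (not the published HTS descriptor).
import numpy as np
from skimage.draw import rectangle_perimeter
from skimage.transform import hough_line

# Toy binary shape: a rectangle outline on a blank image.
img = np.zeros((64, 64), dtype=bool)
rr, cc = rectangle_perimeter((10, 10), end=(50, 40), shape=img.shape)
img[rr, cc] = True

# Accumulate votes in (distance, angle) Hough space.
accum, angles, dists = hough_line(img)

# Summarize the accumulator into a compact feature vector.
votes_per_angle = accum.sum(axis=0).astype(float)  # profile over angles
votes_per_dist = accum.sum(axis=1).astype(float)   # profile over distances
feature = np.concatenate([
    votes_per_angle / votes_per_angle.sum(),
    votes_per_dist / votes_per_dist.sum(),
])
print(feature.shape)
```

The linear-complexity claim is plausible in this framing: each edge pixel contributes one vote per sampled angle, so the accumulator cost grows linearly with the number of contour points.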
Abstract:
The purpose of this paper was to evaluate the agreement among different methods used to estimate angular deviation of the body, to determine the risk of development of upper limb musculoskeletal disorders in dentistry undergraduates. Materials and Methods: Students (n = 79) enrolled in the final year of the undergraduate course of the Araraquara School of Dentistry, São Paulo State University (UNESP), were evaluated. Photographs were taken of students performing clinical procedures, and the work postures adopted by each student were evaluated by means of rapid upper limb assessment (RULA). The individual's final risk score is based on measuring the angular deviations from the neutral positions of the regions evaluated. Two methods were used to estimate the angular deviation of the body: visual examination and the Image Tool software. A RULA final risk score was attributed to each procedure the students performed (n = 333). Agreement between the methods on the risk of musculoskeletal disorders was assessed by means of the Kappa (κ) statistic; the level of significance adopted was 5%. Results: Fair agreement (κ = 0.32) between the evaluated methods was verified. Conclusion: The risk of development of upper limb musculoskeletal disorders in dentistry undergraduates evaluated with RULA was not in agreement between the visual examination and Image Tool methods.
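For reference, Cohen's kappa on two raters' categorical risk scores can be computed as below; the score vectors are made-up stand-ins for the visual-exam and Image Tool ratings, not the study's data.

```python
# Cohen's kappa between two hypothetical raters (stand-ins for visual exam vs. Image Tool).
from sklearn.metrics import cohen_kappa_score

visual = [2, 3, 3, 2, 4, 3, 2, 2, 3, 4]    # RULA risk categories, rater 1
software = [2, 3, 2, 2, 3, 3, 3, 2, 4, 4]  # RULA risk categories, rater 2
kappa = cohen_kappa_score(visual, software)
print(f"kappa = {kappa:.2f}")  # on the Landis-Koch scale, 0.21-0.40 is "fair" agreement
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why a seemingly high raw agreement can still yield only "fair" κ = 0.32.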
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Background: Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) enrolled in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% of weight loss (T1) and after 20% of weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were evaluated by simple regression analysis, and predicted means were compared against those determined by DXA to verify the accuracy of the equations. Results: The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p value and r(2), 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM, and one accurately predicted LM. Conclusions: The equations with two variables are preferable in practice because they are effective and offer an alternative method to estimate body composition in the clinical routine. To estimate lean mass, equations using body weight combined with biometric measures can be proposed; to estimate fat mass, equations using body weight combined with bioimpedance analysis can be proposed.
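A minimal sketch of the model-selection flavor described here, fitting ordinary least squares on two predictors (body weight and thoracic circumference) against a DXA-derived fat mass; the data and the two-variable choice are illustrative only.

```python
# Toy two-variable prediction equation (illustrative data, not the study's).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
bw = rng.uniform(3.5, 7.5, 16)                       # body weight (kg), 16 cats
tc = 30 + 3.0 * bw + rng.normal(0, 1, 16)            # thoracic circumference (cm)
fm = 0.4 * bw + 0.05 * tc + rng.normal(0, 0.2, 16)   # "DXA" fat mass (kg)

X = sm.add_constant(np.column_stack([bw, tc]))
model = sm.OLS(fm, X).fit()
print(model.params)    # intercept and coefficients of the prediction equation
print(model.rsquared)  # r^2, used alongside Mallows' Cp for model selection
```

In a stepwise workflow, candidate predictors would be added or dropped according to criteria such as Mallows' Cp, the p values, and r², exactly the quantities the abstract reports using.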
Abstract:
This article describes an implementation of the optical flow estimation method introduced by Zach, Pock and Bischof. This method is based on the minimization of a functional containing a data term using the L1 norm and a regularization term using the total variation of the flow. The main feature of this formulation is that it allows discontinuities in the flow field while being more robust to noise than the classical approach. The algorithm is an efficient numerical scheme that solves a relaxed version of the problem by alternating minimization.
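For a quick experiment with this family of methods, OpenCV's contrib module ships a Dual TV-L1 implementation; the sketch below runs it on two synthetic frames (the frames and the default parameters are placeholders, not this article's setup).

```python
# Dual TV-L1 optical flow via OpenCV contrib (requires opencv-contrib-python).
import cv2
import numpy as np

# Two toy grayscale frames: a bright square shifted by a few pixels.
prev_f = np.zeros((128, 128), dtype=np.uint8)
next_f = np.zeros((128, 128), dtype=np.uint8)
prev_f[40:60, 40:60] = 255
next_f[43:63, 45:65] = 255

tvl1 = cv2.optflow.createOptFlow_DualTVL1()
flow = tvl1.calc(prev_f, next_f, None)  # flow[..., 0] = u, flow[..., 1] = v
print(flow.shape, flow[50, 55])
```

The TV regularizer is what preserves sharp motion boundaries here; a quadratic smoothness term would blur the flow across the square's edges.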
Abstract:
In this paper we show that a classic optical flow technique by Nagel and Enkelmann can be regarded as an early anisotropic diffusion method with a diffusion tensor. We introduce three improvements into the model formulation: we avoid inconsistencies caused by centering the brightness term and the smoothness term in different images; we use a linear scale-space focusing strategy from coarse to fine scales to avoid convergence to physically irrelevant local minima; and we create an energy functional that is invariant under linear brightness changes. Applying a gradient descent method to the resulting energy functional leads to a system of diffusion-reaction equations. We prove that this system has a unique solution under realistic assumptions on the initial data, and we present an efficient linear implicit numerical scheme in detail. Our method creates flow fields with 100% density over the entire image domain, is robust under a large range of parameter variations, and can recover displacement fields far beyond the typical one-pixel limits characteristic of many differential methods for determining optical flow. We show that it performs better than the classic optical flow methods with 100% density evaluated by Barron et al. (1994). Our software is available from the Internet.
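For context, the anisotropic smoothness term underlying this model penalizes flow variation through a regularized projection tensor. A common statement of the Nagel-Enkelmann diffusion tensor, taken from the standard literature rather than quoted from this abstract (λ is a regularization parameter), is:

```latex
% Nagel-Enkelmann diffusion tensor (standard form from the literature)
D(\nabla I) = \frac{1}{\lvert\nabla I\rvert^{2} + 2\lambda^{2}}
\left( \nabla I^{\perp} (\nabla I^{\perp})^{\top} + \lambda^{2}\,\mathrm{Id} \right),
\qquad
\nabla I^{\perp} = \begin{pmatrix} -\partial_{y} I \\ \partial_{x} I \end{pmatrix}
```

Smoothing is thus steered along image edges (perpendicular to ∇I) rather than across them, which is what makes the diffusion anisotropic.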
Abstract:
We analyse the influence of colour information in optical flow methods. Most of these techniques compute their solutions from grayscale intensities, for simplicity and faster processing, ignoring colour features. However, current processing systems have reduced this computational cost and, on the other hand, it is reasonable to assume that a colour image offers more detail from the scene, which should help find better flow fields. The aim of this work is to determine whether a multi-channel approach provides enough improvement to justify its use. To address this evaluation, we use a multi-channel implementation of the well-known TV-L1 method. Furthermore, we review the state of the art in colour optical flow methods. In the experiments, we study various solutions using grayscale and RGB images from recent evaluation datasets to verify the benefit of colour in motion estimation.
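Evaluations like this one typically score flow fields with the average endpoint error against ground truth; a minimal implementation (the function name is mine) is:

```python
# Average endpoint error (AEE) between an estimated and a ground-truth flow field.
import numpy as np

def average_endpoint_error(flow_est: np.ndarray, flow_gt: np.ndarray) -> float:
    """Both arrays have shape (H, W, 2) holding the (u, v) components per pixel."""
    diff = flow_est - flow_gt
    return float(np.mean(np.sqrt(diff[..., 0] ** 2 + diff[..., 1] ** 2)))
```

Comparing AEE for the grayscale and RGB variants of the same TV-L1 solver is the natural way to quantify whether colour pays for its extra cost.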
Abstract:
We present a non-linear technique to invert strong motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wave-number integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which easily relate the spectrum of the possible solutions to the minimum resolvable wavelength set by the source-station distribution and data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The non-linear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, with the solution searched for by the Neighbourhood Algorithm; the conjugate gradient method is used to solve the linear step. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63 × 10^26 dyne·cm, which corresponds to a moment magnitude MW 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane, and a second relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimate of the errors associated with the parameters.
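As a consistency check on those numbers, the standard moment-magnitude relation MW = (2/3) log10(M0) − 10.7 (with M0 in dyne·cm) indeed maps 2.63 × 10^26 dyne·cm to MW ≈ 6.9:

```python
# Moment magnitude from seismic moment (Hanks-Kanamori relation, M0 in dyne·cm).
import math

M0 = 2.63e26  # dyne·cm, value reported above
Mw = (2.0 / 3.0) * math.log10(M0) - 10.7
print(f"Mw = {Mw:.2f}")  # -> ~6.91
```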
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis.

In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, and (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin around the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain.

The performance of the technique was evaluated in a series of in-silico studies and validated in-vitro through a phantom-based comparison with a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for the out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue.

A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
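The core 2D/3D registration step can be sketched as a 6-DOF optimization of an image-similarity cost; everything below (the projection routine, the cost, the data) is a hypothetical stand-in for the thesis's actual pipeline, kept only to show why the starting pose matters.

```python
# Hypothetical 6-DOF pose search for 2D/3D registration (illustrative stand-in only).
import numpy as np
from scipy.optimize import minimize

def project_model(pose: np.ndarray) -> np.ndarray:
    """Placeholder: render a synthetic projection of a 'bone model' at `pose`
    (tx, ty, tz, rx, ry, rz). A real pipeline would ray-cast the 3D model."""
    img = np.zeros((64, 64))
    cx, cy = 32 + pose[0], 32 + pose[1]
    yy, xx = np.mgrid[0:64, 0:64]
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < (100.0 / (1.0 + 0.01 * pose[2]))] = 1.0
    return img

target = project_model(np.array([3.0, -2.0, 5.0, 0.0, 0.0, 0.0]))  # "fluoroscopic" frame

def cost(pose: np.ndarray) -> float:
    # Sum of squared intensity differences between projection and target image.
    return float(np.sum((project_model(pose) - target) ** 2))

# Local optimization from a rough starting pose; the thesis tackles precisely the
# sensitivity of this step to the starting pose (hence its global/memetic approach).
res = minimize(cost, x0=np.zeros(6), method="Nelder-Mead")
print(res.x[:3])
```

With a local optimizer like this, a poor initial pose can land in a physically meaningless minimum, which is why the thesis's enlarged convergence basin and unsupervised global search matter.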
Abstract:
Magnetic Resonance Spectroscopy (MRS) is an advanced clinical and research application which provides a specific biochemical and metabolic characterization of tissues through the detection and quantification of key metabolites for diagnosis and disease staging. The "Associazione Italiana di Fisica Medica (AIFM)" has promoted the activity of the "Interconfronto di spettroscopia in RM" working group. The purpose of the study is to compare and analyze results obtained by performing MRS on scanners from different manufacturers, in order to compile a robust protocol for spectroscopic examinations in clinical routine. This thesis contributes to this project using the GE Signa HDxt 1.5 T at Pavilion no. 11 of the S.Orsola-Malpighi hospital in Bologna. The spectral analyses have been performed with the jMRUI package, which includes a wide range of preprocessing and quantification algorithms for signal analysis in the time domain. After quality assurance on the scanner with standard and innovative methods, spectra both with and without suppression of the water peak were acquired on the GE test phantom. The comparison of the ratios of the metabolite amplitudes over Creatine computed by the workstation software, which works in the frequency domain, and by jMRUI shows good agreement, suggesting that quantification in either domain may lead to consistent results. The characterization of an in-house phantom provided by the working group achieved its goal of assessing the solution content and the metabolite concentrations with good accuracy. The soundness of the experimental procedure and data analysis is demonstrated by the correct estimation of the T2 of water, the observed biexponential relaxation curve of Creatine, and the correct TE value at which modulation by J coupling causes the Lactate doublet to be inverted in the spectrum. The work of this thesis demonstrates that it is possible to perform measurements and establish protocols for data analysis, based on the physical principles of NMR, which provide robust values for the spectral parameters of clinical use.
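As one concrete piece of the analysis described, a T2 estimate from a multi-echo series is a simple exponential fit; the sketch below (with made-up echo data) shows the standard monoexponential approach.

```python
# Monoexponential T2 fit from multi-echo signal amplitudes (made-up data).
import numpy as np
from scipy.optimize import curve_fit

def decay(te, s0, t2):
    # Signal model: S(TE) = S0 * exp(-TE / T2)
    return s0 * np.exp(-te / t2)

te = np.array([30.0, 60.0, 120.0, 240.0, 480.0])  # echo times (ms)
signal = 1000.0 * np.exp(-te / 150.0) + np.random.default_rng(3).normal(0, 5, te.size)

(s0_fit, t2_fit), _ = curve_fit(decay, te, signal, p0=(signal[0], 100.0))
print(f"T2 ~ {t2_fit:.0f} ms")
```

A biexponential behavior, such as the one observed for Creatine in the thesis, would show up as a systematic misfit of this single-T2 model and call for a two-component sum of exponentials instead.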