992 results for Efficient estimation
Abstract:
This paper presents a new methodology for the adjustment of fuzzy inference systems, which uses a technique based on the error back-propagation method. The free parameters of the fuzzy inference system, such as the intrinsic parameters of its membership functions and the weights of its inference rules, are adjusted automatically. This methodology is interesting not only for the results obtained through computer simulations, but also for its generality concerning the kind of fuzzy inference system used; it is therefore applicable both to the Mamdani architecture and to that suggested by Takagi-Sugeno. The presented methodology is validated through time-series estimation and a mathematical modeling problem. More specifically, the Mackey-Glass chaotic time series is used for the validation of the proposed methodology. © Springer-Verlag Berlin Heidelberg 2007.
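As a rough illustration of the idea only (not the authors' formulation), the sketch below tunes the membership-function parameters and rule consequents of a minimal one-input, two-rule zero-order Takagi-Sugeno system by gradient descent on the squared error; the target function, learning rate and rule layout are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: backprop-style tuning of a one-input, two-rule
# zero-order Takagi-Sugeno system (illustrative, not the paper's code).
m = np.array([0.2, 0.8])   # Gaussian membership centers
s = np.array([0.3, 0.3])   # Gaussian membership widths
c = np.array([0.0, 1.0])   # rule consequents (singletons)
lr = 0.05                  # learning rate

def forward(x):
    w = np.exp(-((x - m) ** 2) / (2 * s ** 2))  # rule firing strengths
    y = np.sum(w * c) / np.sum(w)               # weighted-average defuzzification
    return y, w

# Hypothetical training data: noisy samples of a target function
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 200)
T = np.sin(np.pi * X) + 0.05 * rng.standard_normal(200)

for epoch in range(500):
    for x, t in zip(X, T):
        y, w = forward(x)
        e = y - t
        W = np.sum(w)
        dy_dw = (c - y) / W  # dy/dw_i for the weighted average
        # Analytic gradients of 0.5*e^2 w.r.t. each free parameter
        c -= lr * e * w / W                              # consequents
        m -= lr * e * dy_dw * w * (x - m) / s ** 2       # centers
        s -= lr * e * dy_dw * w * (x - m) ** 2 / s ** 3  # widths
```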
Abstract:
Traditional methods of submerged aquatic vegetation (SAV) survey are time-consuming and therefore costly. Optical remote sensing is an alternative, but it has some limitations in the aquatic environment. Echosounder techniques, by contrast, are efficient at detecting submerged targets. The aim of this study is therefore to evaluate different interpolation approaches applied to SAV sample data collected by echosounder. The case study was performed in a region of the Uberaba River, Brazil. The interpolation methods evaluated in this work are: Nearest Neighbor, Weighted Average, Triangular Irregular Network (TIN) and ordinary kriging. The best results were obtained with kriging interpolation. Thus, the use of geostatistics is recommended for spatial inference of SAV from sample data surveyed with echosounder techniques. © 2012 IEEE.
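A minimal sketch of how such a comparison could be set up in Python, assuming scattered (x, y, value) echosounder samples; the pykrige package, the synthetic data, and the grid layout are assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator, LinearNDInterpolator
from pykrige.ok import OrdinaryKriging  # assumes pykrige is installed

# Hypothetical scattered SAV samples: coordinates and measured values
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 150), rng.uniform(0, 100, 150)
z = np.sin(x / 20) + np.cos(y / 25) + 0.1 * rng.standard_normal(150)

gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
gx, gy = np.meshgrid(gridx, gridy)

# Nearest-neighbour and TIN-style (piecewise linear) interpolation
nn = NearestNDInterpolator(np.c_[x, y], z)(gx, gy)
tin = LinearNDInterpolator(np.c_[x, y], z)(gx, gy)

# Ordinary kriging with a spherical variogram model
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
krig, variance = ok.execute("grid", gridx, gridy)
```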
Abstract:
One approach to verifying the adequacy of methods for estimating reference evapotranspiration (ET0) is comparison with the Penman-Monteith method, recommended by the Food and Agriculture Organization of the United Nations (FAO) as the standard method for estimating ET0. This study aimed to compare the Makkink (MK), Hargreaves (HG) and Solar Radiation (RS) methods for estimating ET0 with Penman-Monteith (PM). For this purpose, we used daily data of global solar radiation, air temperature, relative humidity and wind speed for the year 2010, obtained from the automatic meteorological station of the National Institute of Meteorology (latitude 18° 91' 66 S, longitude 48° 25' 05 W, altitude 869 m) situated on the campus of the Federal University of Uberlandia, MG, Brazil. Results for the period were analyzed on a daily basis using regression analysis, considering the linear model y = ax, where the dependent variable was the Penman-Monteith estimate and the independent variable the ET0 estimate of each evaluated method. A methodology was used to check the influence of the standard deviation of daily ET0 on the comparison of methods. The evaluation indicated that the Solar Radiation and Penman-Monteith methods cannot be compared, while the Hargreaves method showed the best fit for estimating ET0.
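For the zero-intercept regression y = ax used in this kind of comparison, the least-squares slope has the closed form a = Σxy / Σx². A minimal sketch (variable names are illustrative):

```python
import numpy as np

def slope_through_origin(et0_pm, et0_alt):
    """Least-squares slope of y = a*x, with Penman-Monteith ET0 as the
    dependent variable and an alternative method's ET0 as the regressor."""
    a = np.sum(et0_alt * et0_pm) / np.sum(et0_alt ** 2)
    residuals = et0_pm - a * et0_alt
    r2 = 1 - np.sum(residuals ** 2) / np.sum((et0_pm - et0_pm.mean()) ** 2)
    return a, r2
```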
Abstract:
Despite their generality, conventional Volterra filters are inadequate for some applications, due to the huge number of parameters that may be needed for accurate modelling. When a state-space model of the target system is known, this inadequacy can be assessed by computing its kernels, which also provides valuable information for choosing an adequate alternative Volterra filter structure, if necessary, and is useful for validating parameter estimation procedures. In this letter, we derive expressions for the kernels by using the Carleman bilinearization method, for which an efficient algorithm is given. Simulation results are presented, which confirm the usefulness of the proposed approach.
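As a hedged illustration of the route from a state-space model to Volterra kernels (a toy example, not the letter's algorithm), the sketch below bilinearizes the scalar system ẋ = ax + dx² + bu by taking the truncated extended state ξ = (x, x²), then evaluates the first two kernels of the resulting bilinear system ξ̇ = Aξ + Nξu + Bu via matrix exponentials, using one common triangular-kernel convention (e.g. Rugh's); the system, truncation order and convention are assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Toy system: x' = a*x + d*x^2 + b*u, output y = x.
a, d, b = -1.0, 0.5, 1.0

# Carleman bilinearization with extended state xi = (x, x^2),
# truncating the x^3 term that appears in d/dt(x^2) = 2*x*x':
A = np.array([[a, d], [0.0, 2 * a]])
N = np.array([[0.0, 0.0], [2 * b, 0.0]])
B = np.array([[b], [0.0]])
C = np.array([[1.0, 0.0]])

def h1(t):
    """First-order Volterra kernel: C e^{At} B."""
    return (C @ expm(A * t) @ B).item()

def h2(s1, s2):
    """Triangular second-order kernel, nonzero for s2 >= s1:
    C e^{A s1} N e^{A (s2 - s1)} B."""
    if s2 < s1:
        return 0.0
    return (C @ expm(A * s1) @ N @ expm(A * (s2 - s1)) @ B).item()
```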
Abstract:
This article describes an implementation of the optical flow estimation method introduced by Zach, Pock and Bischof. This method is based on the minimization of a functional containing a data term using the L¹ norm and a regularization term using the total variation of the flow. The main feature of this formulation is that it allows discontinuities in the flow field while being more robust to noise than the classical approach. The algorithm is an efficient numerical scheme that solves a relaxed version of the problem by alternate minimization.
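For reference, the TV-L¹ functional being minimized (with I₀, I₁ the two frames and u the flow field) is

\[ E(u) = \int_\Omega \left( \lambda\,\lvert I_1(x + u(x)) - I_0(x)\rvert + \lvert \nabla u(x) \rvert \right) dx , \]

where λ weights data fidelity against the total-variation regularizer; the alternating scheme decouples the two terms through an auxiliary flow field.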
Abstract:
In this paper we show that a classic optical flow technique by Nagel and Enkelmann can be regarded as an early anisotropic diffusion method with a diffusion tensor. We introduce three improvements into the model formulation: we avoid inconsistencies caused by centering the brightness term and the smoothness term in different images; we use a linear scale-space focusing strategy from coarse to fine scales to avoid convergence to physically irrelevant local minima; and we create an energy functional that is invariant under linear brightness changes. Applying a gradient descent method to the resulting energy functional leads to a system of diffusion-reaction equations. We prove that this system has a unique solution under realistic assumptions on the initial data, and we present an efficient linear implicit numerical scheme in detail. Our method creates flow fields with 100% density over the entire image domain, is robust under a large range of parameter variations, and can recover displacement fields far beyond the typical one-pixel limits characteristic of many differential methods for determining optical flow. We show that it performs better than the classic optical flow methods with 100% density evaluated by Barron et al. (1994). Our software is available from the Internet.
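Schematically, and from memory rather than the paper itself, the energy has the form

\[ E(h) = \int_\Omega \big(I_1(x) - I_2(x + h(x))\big)^2\,dx + \alpha \int_\Omega \big( \nabla u^\top D(\nabla I_1)\,\nabla u + \nabla v^\top D(\nabla I_1)\,\nabla v \big)\,dx , \]

where h = (u, v) is the displacement field, α the smoothness weight, and D(∇I) a regularized projection tensor of Nagel-Enkelmann type that inhibits smoothing across image edges.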
Abstract:
In the last couple of decades we have witnessed a reappraisal of spatial design-based techniques. Usually, the information on the spatial location of the individuals of a population has been used to develop efficient sampling designs. This thesis offers a new technique for inference on both individual values and global population values that employs the spatial information available before sampling at the estimation stage, by rewriting a deterministic interpolator under a design-based framework. The resulting point estimator of the individual values is treated both for finite spatial populations and for continuous spatial domains, while the theory on the estimator of the global population value covers the finite population case only. A fairly broad simulation study compares the results of the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, which is the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods, in order to control different aspects of the spatial structure. The simulation outcomes show that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of spatial information substantially improves design-based spatial inference on individual values.
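As a loose illustration of plugging a deterministic interpolator into a design-based setting (not the thesis's actual estimator), the sketch below predicts unsampled values of a finite spatial population from a without-replacement sample using inverse-distance weights; all names and the weighting power are illustrative.

```python
import numpy as np

def idw_point_estimates(coords_sample, values_sample, coords_target, power=2.0):
    """Inverse-distance-weighted prediction of individual values at
    unsampled locations from a spatial sample (deterministic interpolator)."""
    d = np.linalg.norm(coords_target[:, None, :] - coords_sample[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power  # guard against zero distances
    return (w @ values_sample) / w.sum(axis=1)
```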
Abstract:
In recent years, thanks to technological advances, electromagnetic methods for non-invasive shallow subsurface characterization have been increasingly used in many areas of environmental and geoscience applications. Among the geophysical electromagnetic methods, Ground Penetrating Radar (GPR) has received unprecedented attention over the last few decades due to its ability to obtain high-resolution electromagnetic parameter information in space and time, together with its versatility, ease of handling, non-invasive nature, high resolving power, and fast deployment. The main focus of this thesis is to perform dielectric site characterization in an efficient and accurate way by studying in depth the physical phenomenon behind a recently developed GPR approach, the so-called early-time technique, which infers the electrical properties of the soil in the proximity of the antennas. In particular, the early-time approach is based on amplitude analysis of the early-time portion of the GPR waveform using a fixed-offset ground-coupled antenna configuration in which the separation between the transmitting and receiving antennas is on the order of the dominant pulse wavelength. Amplitude information can be extracted from the early-time signal through complex trace analysis, computing the instantaneous-amplitude attributes over a selected time window of the early-time signal. Basically, if the acquired GPR signal is taken as the real part of a complex trace, and the imaginary part is the quadrature component obtained by applying a Hilbert transform to the GPR trace, then the amplitude envelope is the absolute value of the resulting complex trace (also known as the instantaneous amplitude). Analysing laboratory data, numerical simulations and natural field conditions, and summarising the overall results embodied in this thesis, the early-time GPR technique can be suggested as an effective method to estimate physical properties of the soil in a fast and non-invasive way.
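The envelope computation described above is a one-liner in practice; a minimal sketch assuming a sampled GPR trace stored in a NumPy array (the attribute window length is an illustrative parameter):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_amplitude(trace):
    """Envelope of a trace: modulus of the analytic signal, whose
    imaginary part is the Hilbert transform of the trace."""
    return np.abs(hilbert(trace))

def early_time_attribute(trace, n_early):
    """Mean envelope over the first n_early samples of the trace."""
    return instantaneous_amplitude(trace)[:n_early].mean()
```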
Abstract:
In biostatistical applications, interest often focuses on the estimation of the distribution of a time-until-event variable T. If one observes whether or not T exceeds an observed monitoring time at a random number of monitoring times, the data structure is called interval censored data. We extend this data structure by allowing the presence of a possibly time-dependent covariate process that is observed until the end of follow-up. If one only assumes that the censoring mechanism satisfies coarsening at random, then, by the curse of dimensionality, typically no regular estimators will exist. To fight the curse of dimensionality we follow the approach of Robins and Rotnitzky (1992) by modeling parameters of the censoring mechanism. We model the right-censoring mechanism by modeling the hazard of the follow-up time, conditional on T and the covariate process. For the monitoring mechanism we avoid modeling the joint distribution of the monitoring times by modeling only a univariate hazard of the pooled monitoring times, conditional on the follow-up time, T, and the covariate process, which can be estimated by treating the pooled sample of monitoring times as i.i.d. In particular, it is assumed that the monitoring times and the right-censoring times depend on T only through the observed covariate process. We introduce an inverse probability of censoring weighted (IPCW) estimator of the distribution of T, and of smooth functionals thereof, which is guaranteed to be consistent and asymptotically normal if we have available correctly specified semiparametric models for the two hazards of the censoring process. Furthermore, given such correctly specified models for these hazards, we propose a one-step estimator which improves on the IPCW estimator if we correctly specify a lower-dimensional working model for the conditional distribution of T given the covariate process, and which remains consistent and asymptotically normal if this working model is misspecified. It is shown that the one-step estimator is efficient if each subject is monitored at most once and the working model contains the truth. In general, it is shown that the one-step estimator optimally uses the surrogate information if the working model contains the truth. It is not optimal in using the interval information provided by the current status indicators at the monitoring times, but simulations in Peterson and van der Laan (1997) show that the efficiency loss is small.
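The generic identity underlying IPCW estimation, shown here schematically rather than in the paper's full interval-censored form: if a quantity Y is observed (Δ = 1) with probability π(X) > 0 depending only on observed data X, then

\[ E\!\left[ \frac{\Delta}{\pi(X)}\, Y \right] = E[Y], \qquad \pi(X) = P(\Delta = 1 \mid X), \]

so sample averages of inverse-weighted complete observations remain unbiased; in the paper, π is built from the two modeled hazards of the censoring process.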
Abstract:
This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases in our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although this is not true in general, with our methods profiling and backfitting are asymptotically equivalent. We also consider pseudolikelihood methods in which some nuisance parameters are estimated by a different algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
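A representative special case of this model class (a partially linear marginal model for repeated measures; the notation here is ours, not the paper's):

\[ g\big( E[Y_{ij} \mid X_{ij}, Z_{ij}] \big) = X_{ij}^\top \beta + \theta(Z_{ij}), \qquad i = 1, \dots, n, \; j = 1, \dots, m, \]

with β the parametric covariate effects and θ(·) the nonparametric function evaluated repeatedly within each cluster. Roughly, profile-kernel estimation replaces θ by a kernel estimate θ̂(·; β) and then maximizes over β, while backfitting alternates between updating the parametric and nonparametric parts.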
Abstract:
In recent years, researchers in the health and social sciences have become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of an exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs after the exposure and prior to the outcome. Natural direct and indirect effects are of particular interest because they generally combine to produce the total effect of the exposure and therefore provide insight into the mechanism by which it operates to produce the outcome. A semiparametric theory has recently been proposed for making inferences about marginal mean natural direct and indirect effects in observational studies (Tchetgen Tchetgen and Shpitser, 2011); it delivers multiply robust, locally efficient estimators of the marginal direct and indirect effects, and thus generalizes previous results for total effects to the mediation setting. In this paper we extend the new theory to handle a setting in which a parametric model for the natural direct (indirect) effect within levels of pre-exposure variables is specified, while the model for the observed data likelihood is otherwise unrestricted. We show that estimation is generally not feasible in this model because of the curse of dimensionality associated with the required estimation of auxiliary conditional densities or expectations given high-dimensional covariates. We thus consider multiply robust estimation and propose a more general model which assumes that a subset, but not necessarily all, of several working models holds.
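In the standard counterfactual notation, with exposure levels a, a* and Y(a, M(a*)) the outcome under exposure a with the mediator set to its natural value under a*, the total effect decomposes as

\[ E[Y(a, M(a))] - E[Y(a^*, M(a^*))] = \underbrace{E[Y(a, M(a^*))] - E[Y(a^*, M(a^*))]}_{\text{natural direct effect}} + \underbrace{E[Y(a, M(a))] - E[Y(a, M(a^*))]}_{\text{natural indirect effect}} . \]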
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Because the likelihood has no closed form, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iteratively weighted least squares (IWLS). High computational costs and memory constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy, based on the Gauss-Seidel algorithm, that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
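A minimal sketch of the Gauss-Seidel idea for a plain Poisson GLM (random effects and the paper's exact scheme omitted): the coefficient vector is split into blocks, and each block is refit as a small GLM while the other blocks' contribution is held fixed in an offset. The statsmodels usage and block layout are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

def gauss_seidel_poisson(y, X, blocks, n_sweeps=20):
    """Blockwise (Gauss-Seidel) fitting of a Poisson GLM.
    blocks: list of column-index arrays partitioning X's columns."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_sweeps):
        for cols in blocks:
            # Linear-predictor contribution of all *other* blocks,
            # held fixed as an offset while this block is refit
            others = np.setdiff1d(np.arange(X.shape[1]), cols)
            offset = X[:, others] @ beta[others]
            fit = sm.GLM(y, X[:, cols],
                         family=sm.families.Poisson(),
                         offset=offset).fit()
            beta[cols] = fit.params
    return beta
```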
Abstract:
This paper considers statistical models in which two different types of events, such as the diagnosis of a disease and the remission of the disease, occur alternately over time and are observed subject to right censoring. We propose nonparametric estimators for the joint distribution of the bivariate recurrence times and the marginal distribution of the first recurrence time. In general, the marginal distribution of the second recurrence time cannot be estimated due to an identifiability problem, but a conditional distribution of the second recurrence time can be estimated nonparametrically. In the literature, statistical methods have been developed to estimate the joint distribution of bivariate recurrence times based on data from the first pair of censored bivariate recurrence times. These methods are inefficient in the current model because recurrence times of higher order are not used. Asymptotic properties of the estimators are established. Numerical studies demonstrate that the estimators perform well with practical sample sizes. We apply the proposed methods to a Danish psychiatric case register data set to illustrate the methods and theory.