897 results for Unbiased estimating functions


Relevance:

30.00%

Publisher:

Abstract:

Atmospheric aerosols are now actively studied, in particular because of their radiative and climate impacts. Estimates of the direct aerosol radiative perturbation, caused by extinction of incident solar radiation, usually rely on radiative transfer codes and involve simplifying hypotheses. This paper addresses two approximations which are widely used for simplicity and to limit the computational cost of the calculations. First, it is shown that using a Lambertian albedo instead of the more rigorous bidirectional reflectance distribution function (BRDF) to model the ocean surface radiative properties leads to large relative errors in the instantaneous aerosol radiative perturbation. When averaging over the day, these errors cancel out to acceptable levels of less than 3% (except in the northern hemisphere winter). The second aim of this study is to address aerosol non-sphericity effects. Comparing an experimental phase function with an equivalent Mie-calculated phase function, we found acceptable relative errors if the aerosol radiative perturbation calculated for a given optical thickness is averaged over the day. However, retrieving the optical thickness of non-spherical aerosols under the assumption of spherical particles can lead to significant errors, because the spherical and non-spherical phase functions differ significantly. Discrepancies in the aerosol radiative perturbation between the spherical and non-spherical cases are sometimes reduced and sometimes enhanced if the aerosol optical thickness for the spherical case is adjusted to fit the simulated radiance of the non-spherical case.

Relevance:

30.00%

Publisher:

Abstract:

There are no direct observational methods for determining the total rate at which energy is extracted from the solar wind by the magnetosphere. In the absence of such a direct measurement, alternative means of estimating the energy available to drive the magnetospheric system have been developed, using different ionospheric and magnetospheric indices as proxies for energy consumption and dissipation, and thus for the input. The so-called coupling functions are constructed from the parameters of the interplanetary medium, as either theoretical or empirical estimates of energy transfer, and the effectiveness of these coupling functions has been evaluated in terms of their correlation with the chosen index. A number of coupling functions have been studied in the past, with various criteria governing event selection and timescale. The present paper contains an exhaustive survey of the correlation between the near-Earth solar wind and geomagnetic activity, as measured by two of the planetary indices, over a wide variety of timescales. Various combinations of interplanetary parameters are evaluated, with careful allowance for the effects of data gaps in the interplanetary data. We show that the theoretical coupling function P_alpha, first proposed by Vasyliunas et al., is superior at all timescales from 1 day to 1 year.
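
As an illustration of how such a survey works mechanically, the sketch below correlates a coupling-function time series with a synthetic geomagnetic index over several averaging timescales. The data are synthetic stand-ins for real solar-wind records, and the P_alpha-style form below (with free coupling exponent alpha) is an assumed parameterisation, not necessarily the paper's exact one.

```python
# Minimal sketch: correlation of a solar-wind coupling function with a
# geomagnetic index at several averaging timescales. All inputs are
# synthetic placeholders; the P_alpha form is an assumption.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_hours = 24 * 365 * 10                # ten years of hourly data
t = pd.date_range("2000-01-01", periods=n_hours, freq="h")

# Synthetic hourly solar-wind inputs (placeholders for real OMNI data)
N = rng.lognormal(mean=1.6, sigma=0.5, size=n_hours)        # density, cm^-3
V = 400 + 100 * rng.standard_normal(n_hours).clip(-3, 3)    # speed, km/s
B = rng.lognormal(mean=1.2, sigma=0.5, size=n_hours)        # |B|, nT
theta = rng.uniform(0, np.pi, size=n_hours)                 # IMF clock angle

def p_alpha(N, V, B, theta, alpha=0.5):
    """Assumed P_alpha-style coupling function (arbitrary units)."""
    return (N ** (2 / 3 - alpha) * V ** (7 / 3 - 2 * alpha)
            * B ** (2 * alpha) * np.sin(theta / 2) ** 4)

# Synthetic index loosely driven by the coupling function plus noise
cf = p_alpha(N, V, B, theta)
index = 0.8 * (cf / cf.std()) + 0.6 * rng.standard_normal(n_hours)

df = pd.DataFrame({"cf": cf, "index": index}, index=t)

# Correlation as a function of averaging timescale, 1 day to 1 year
for window in ["1D", "27D", "365D"]:
    r = df.resample(window).mean().corr().loc["cf", "index"]
    print(f"{window:>5}: r = {r:.3f}")
```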

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider the problem of estimating the number of times an air quality standard is exceeded in a given period of time. A non-homogeneous Poisson model is proposed to analyse this issue. The rate at which the Poisson events occur is given by a rate function lambda(t), t >= 0. This rate function depends on some parameters that need to be estimated. Two forms of lambda(t), t >= 0 are considered: one of the Weibull form and the other of the exponentiated-Weibull form. Parameter estimation is performed using a Bayesian formulation based on the Gibbs sampling algorithm. Prior distributions for the parameters are assigned in two stages. In the first stage, non-informative prior distributions are considered. Using the information provided by the first stage, more informative prior distributions are used in the second. The theoretical development is applied to data provided by the monitoring network of Mexico City. The rate function that best fits the data varies according to the region of the city and/or the threshold considered. In some cases the best fit is the Weibull form and in others the exponentiated-Weibull form. Copyright (C) 2007 John Wiley & Sons, Ltd.
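
The expected number of exceedances in [0, t] is the integrated rate m(t) = integral from 0 to t of lambda(s) ds. A minimal sketch of the two rate-function families follows; the Weibull mean function is standard, while the exponentiated-Weibull form shown is one natural choice and should be read as an assumption rather than the paper's exact parameterisation.

```python
# Minimal sketch: expected number of threshold exceedances in [0, t] under
# a non-homogeneous Poisson process, m(t) = integral of lambda(s) ds.
import numpy as np

def m_weibull(t, alpha, beta):
    """Weibull mean function m(t) = (t/beta)**alpha, so the rate is
    lambda(t) = (alpha/beta) * (t/beta)**(alpha - 1)."""
    return (t / beta) ** alpha

def m_exp_weibull(t, alpha, sigma, theta, nu=1.0):
    """Exponentiated-Weibull-type mean function (assumed form):
    m(t) = nu * (1 - exp(-(t/sigma)**alpha))**theta."""
    return nu * (1.0 - np.exp(-(t / sigma) ** alpha)) ** theta

# Expected exceedance counts at a few horizons, illustrative parameters
t = np.array([30.0, 90.0, 180.0, 365.0])                    # days
print("Weibull:    ", m_weibull(t, alpha=1.3, beta=50.0).round(2))
print("Exp-Weibull:", m_exp_weibull(t, alpha=1.3, sigma=50.0,
                                    theta=0.7, nu=40.0).round(2))

# In the paper, the parameters of lambda(t) are estimated by Gibbs
# sampling in a two-stage Bayesian scheme: non-informative priors first,
# then more informative priors built from the first-stage output.
```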

Relevance:

30.00%

Publisher:

Abstract:

The estimation of labor supply elasticities has been an important issue in the economic literature. Yet previous work has estimated conditional mean labor supply functions only. The objective of this paper is to obtain more information on labor supply by estimating the conditional quantile labor supply function. We use a sample of prime-age urban male employees in Brazil. Two-stage estimators are used, as the net wage and virtual income are found to be endogenous to the model. Contrary to previous work using conditional mean estimators, it is found that labor supply elasticities vary significantly and asymmetrically across hours of work. While the income and wage elasticities at the standard work week are zero, for those working longer hours the elasticities are negative.
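
A minimal sketch of the final estimation stage on synthetic data follows: a log-hours equation is fitted at several conditional quantiles, so the wage and income elasticities are free to differ across the hours distribution. The two-stage correction for endogenous net wages and virtual income used in the paper is omitted here.

```python
# Minimal sketch (synthetic data): conditional quantile labor supply.
# Heteroskedastic errors make the wage elasticity vary across quantiles.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
log_wage = rng.normal(1.0, 0.4, n)
log_virtual_income = rng.normal(6.0, 0.5, n)
u = (0.1 + 0.2 * (log_wage - log_wage.min())) * rng.standard_normal(n)
log_hours = 3.7 + 0.0 * log_wage - 0.05 * log_virtual_income + u

X = sm.add_constant(np.column_stack([log_wage, log_virtual_income]))
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(log_hours, X).fit(q=q)
    print(f"q={q}: wage elasticity={res.params[1]:+.3f}, "
          f"income elasticity={res.params[2]:+.3f}")
```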

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a methodology to estimate and identify different kinds of economic interaction whenever these interactions can be established in the form of spatial dependence. First, we apply the semi-parametric approach of Chen and Conley (2001) to the estimation of reaction functions. The methodology is then applied to the analysis of financial providers in Thailand. Based on a sample of financial institutions, we provide an economic framework to test whether the observed spatial pattern is compatible with strategic competition (local interactions) or social planning (global interactions). Our estimates suggest that the provision of commercial banks and suppliers' credit access is determined by spatial competition, while the Thai Bank for Agriculture and Agricultural Cooperatives is distributed as in a social planner's problem.
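
The sketch below is a far simpler diagnostic than the Chen and Conley (2001) semi-parametric estimator used in the paper; it is included only to make the competition-versus-planner intuition concrete, using a Clark-Evans nearest-neighbour ratio on synthetic facility locations.

```python
# Clark-Evans nearest-neighbour ratio: R < 1 suggests clustering
# (consistent with local strategic interaction); R > 1 suggests a
# regular, spread-out pattern (closer to a social-planner placement).
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Ratio of observed to expected mean nearest-neighbour distance."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)   # k=2: nearest point other than self
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(points) / area)
    return observed / expected

rng = np.random.default_rng(2)
clustered = rng.normal(0.5, 0.05, size=(200, 2)) % 1.0   # tight cluster
uniform = rng.uniform(0.0, 1.0, size=(200, 2))           # random placement

print("clustered pattern R =", round(clark_evans(clustered, 1.0), 2))
print("uniform pattern   R =", round(clark_evans(uniform, 1.0), 2))
```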

Relevance:

30.00%

Publisher:

Abstract:

Soil aggregation is an index of soil structure measured by the mean weight diameter (MWD) or by scaling factors often interpreted as fragmentation fractal dimensions (D_f). However, the MWD provides a biased estimate of soil aggregation due to spurious correlations among aggregate-size fractions and to scale-dependency. The scale-invariant D_f rests on weak assumptions made to allow particle counts, is sensitive to the selection of the fractal domain, and may frequently exceed a value of 3, implying that D_f is also a biased estimate of aggregation. Aggregation indices based on mass may be computed without bias using compositional analysis techniques. Our objective was to elaborate compositional indices of soil aggregation and to compare them with the MWD and D_f using a published dataset describing the effect of 7 cropping systems on aggregation. Six aggregate-size fractions were arranged into a sequence of D-1 balances of building blocks that portray the process of soil aggregation. Isometric log-ratios (ilrs) are scale-invariant and orthogonal log contrasts, or balances, that possess the Euclidean geometry necessary to compute a distance between any two aggregation states, known as the Aitchison distance A(x,y). Close correlations (r > 0.98) were observed between the MWD, D_f, and the ilr contrasting large and small aggregate sizes. Several unbiased embedded ilrs can characterize the heterogeneous nature of soil aggregates and be related to soil properties or functions. Soil bulk density and penetration resistance were closely related to A(x,y) computed with reference to bare fallow. The A(x,y) is easy to implement as an unbiased index of soil aggregation using standard sieving methods and may allow comparisons between studies. (C) 2012 Elsevier B.V. All rights reserved.
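
A minimal sketch of the compositional machinery follows: the centred log-ratio transform, the Aitchison distance A(x,y), and one example ilr balance contrasting large and small aggregate-size fractions. The compositions and the particular binary partition are illustrative assumptions, not the authors' data or their exact sequential binary partition.

```python
# Compositional indices of aggregation: clr transform, Aitchison distance,
# and one ilr balance over six aggregate-size mass fractions.
import numpy as np

def closure(x):
    """Rescale a composition so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def clr(x):
    """Centred log-ratio transform."""
    lx = np.log(closure(x))
    return lx - lx.mean()

def aitchison_distance(x, y):
    """A(x, y): Euclidean distance between clr (equivalently ilr) coords."""
    return np.linalg.norm(clr(x) - clr(y))

def ilr_balance(x, num_idx, den_idx):
    """One ilr balance contrasting two groups of parts."""
    x = closure(x)
    r, s = len(num_idx), len(den_idx)
    gm = lambda v: np.exp(np.log(v).mean())   # geometric mean
    return np.sqrt(r * s / (r + s)) * np.log(gm(x[num_idx]) / gm(x[den_idx]))

# Six aggregate-size fractions (mass proportions), illustrative only
fallow  = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]
cropped = [0.20, 0.25, 0.20, 0.15, 0.10, 0.10]

print("A(x, y) =", round(aitchison_distance(fallow, cropped), 3))
print("large-vs-small balance, fallow :",
      round(ilr_balance(fallow, [3, 4, 5], [0, 1, 2]), 3))
print("large-vs-small balance, cropped:",
      round(ilr_balance(cropped, [3, 4, 5], [0, 1, 2]), 3))
```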

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

Mycobacterium bovis infects badgers (Meles meles), a wildlife species linked with the spread of the associated disease, tuberculosis (TB), in cattle. Control of livestock infections depends in part on the spatial and social structure of the wildlife host. Here we describe the spatial association of M. bovis infection in a badger population using data from the first year of the Four Area Project in Ireland. Using second-order intensity functions, we show there is strong evidence of clustering of TB cases in each of the four areas, i.e. a global tendency for infected cases to occur near other infected cases. Using estimated intensity functions, we identify locations where particular strains of TB cluster. Generalized linear geostatistical models are used to assess the practical range at which spatial correlation occurs, which is found to exceed 6 in all areas. The study is of relevance to the scale of localized badger culling in the control of the disease in cattle.
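
To make "second-order intensity functions" concrete, the sketch below computes a naive (edge-uncorrected) Ripley's K estimate for a clustered and a random synthetic point pattern; under complete spatial randomness K(r) is approximately pi*r^2, and clustering inflates it.

```python
# Naive Ripley's K(r) estimate, no edge correction, on synthetic patterns.
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, r, area):
    """K_hat(r) = (area / n^2) * sum over ordered pairs of 1(d_ij <= r)."""
    n = len(points)
    d = pdist(points)                  # each unordered pair once
    lam = n / area
    return np.array([2 * np.sum(d <= ri) / (n * lam) for ri in r])

rng = np.random.default_rng(3)
parents = rng.uniform(0, 1, size=(20, 2))
clustered = (parents[rng.integers(0, 20, 300)]
             + rng.normal(0, 0.02, size=(300, 2))) % 1.0
random_pts = rng.uniform(0, 1, size=(300, 2))

r = np.array([0.02, 0.05, 0.10])
print("K(r), clustered:", ripley_k(clustered, r, 1.0).round(4))
print("K(r), random   :", ripley_k(random_pts, r, 1.0).round(4))
print("pi*r^2 (CSR)   :", (np.pi * r ** 2).round(4))
```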

Relevance:

30.00%

Publisher:

Abstract:

Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Estimating such a level set, and not solely its volume, and quantifying the associated uncertainty are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
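
A minimal sketch of the plug-in version of this idea follows (the general Gaussian-process excursion setting, not the paper's random-set estimator): fit a GP to a few evaluations, compute the pointwise probability of exceeding the threshold, and treat the region where this probability is near 0.5 as the uncertain part of the level-set estimate.

```python
# Plug-in level-set estimate from a GP surrogate of a costly function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x      # stand-in for a costly code
threshold = 0.8

X_train = np.linspace(0, 4, 9).reshape(-1, 1)
y_train = f(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
gp.fit(X_train, y_train)

X_grid = np.linspace(0, 4, 401).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)

# Pointwise excursion probability P(f(x) > T) under the GP
p_exceed = norm.sf((threshold - mean) / np.maximum(std, 1e-12))

level_set = X_grid[p_exceed > 0.5]                       # plug-in estimate
uncertain = X_grid[(p_exceed > 0.05) & (p_exceed < 0.95)]

print(f"estimated excursion volume: {level_set.size / 401 * 4:.2f}")
print(f"ambiguous region volume   : {uncertain.size / 401 * 4:.2f}")
```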

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology were presented. The conditional posterior density of the intraclass correlation coefficient was then derived, and estimation procedures related to this derivation were shown in detail. Three examples of applications of the conditional posterior density to specific data sets were also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient with more traditional estimators. The non-Bayesian estimation methods used were analysis of variance and maximum likelihood for balanced data, and MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project was that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful and practical alternatives to traditional methods of estimation.
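
For reference, the classical ANOVA estimator that the Bayesian estimates are compared against has a closed form for balanced one-way random-effects data: ICC = (MSB - MSW) / (MSB + (n - 1) * MSW). A minimal sketch on simulated data:

```python
# ANOVA estimator of the intraclass correlation, balanced one-way layout.
import numpy as np

def icc_anova(data):
    """data: array of shape (k, n) -- k groups, n observations per group."""
    k, n = data.shape
    grand = data.mean()
    group_means = data.mean(axis=1)
    msb = n * np.sum((group_means - grand) ** 2) / (k - 1)    # between
    msw = np.sum((data - group_means[:, None]) ** 2) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(4)
k, n, sigma_a, sigma_e = 30, 5, 1.0, 1.0       # true ICC = 0.5
groups = rng.normal(0, sigma_a, size=(k, 1))
data = groups + rng.normal(0, sigma_e, size=(k, n))

print(f"ANOVA ICC estimate: {icc_anova(data):.3f} (true value 0.5)")
```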

Relevance:

30.00%

Publisher:

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent of the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.
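
A minimal sketch of the calibration idea follows, with entirely hypothetical numbers (this is not CiaoPP's actual cost model or profiler output): benchmark programs with known statically inferred operation counts are timed once on the platform, and per-operation time constants are fitted by least squares.

```python
# One-time platform calibration: fit per-operation time constants from
# profiled benchmarks whose static operation counts are known.
import numpy as np

# Rows: benchmarks; columns: counts of assumed operation classes
# (e.g. unifications, calls, arithmetic ops) from static analysis.
counts = np.array([
    [1.0e6, 2.0e5, 5.0e4],
    [4.0e6, 8.0e5, 1.0e5],
    [2.0e6, 1.0e6, 9.0e5],
    [8.0e6, 3.0e5, 2.0e5],
])
measured_seconds = np.array([0.031, 0.120, 0.095, 0.210])  # profiled times

# Per-operation time constants for this platform (non-negative in
# practice; plain least squares keeps the sketch short)
t_op, *_ = np.linalg.lstsq(counts, measured_seconds, rcond=None)
print("calibrated per-op times (s):", t_op)

# Predict the execution time of a new procedure from its cost bounds
new_counts = np.array([3.0e6, 5.0e5, 3.0e5])
print(f"predicted time: {new_counts @ t_op:.4f} s")
```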

Relevance:

30.00%

Publisher:

Abstract:

Effective static analyses have been proposed which allow inferring functions that bound the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and such bounds have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in certain distributed computation scenarios where different platforms come into play, each with different capabilities, it is more interesting to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution time. With this objective in mind, we propose a method which allows inferring upper and lower bounds on the execution times of the procedures of a program on a given execution platform. The approach combines compile-time cost bounds analysis with a one-time profiling of the platform in order to determine the values of certain constants for that platform. These constants calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
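
Complementing the calibration sketch above, the following illustrates the prediction step: once per-operation constants are calibrated, statically inferred lower and upper cost-bound functions (hypothetical closed forms in the input size n here, not CiaoPP output) become time-bound functions for the platform.

```python
# Turn cost-bound functions into time-bound functions via calibrated
# per-operation constants. All numbers are hypothetical.
T_UNIF, T_CALL = 2.0e-8, 5.0e-8      # assumed calibrated constants (s/op)

def time_bounds(n):
    """Lower/upper execution-time bounds for a hypothetical procedure."""
    lower_ops = {"unif": 2 * n, "call": n}        # best case: O(n)
    upper_ops = {"unif": n * n, "call": 3 * n}    # worst case: O(n^2)
    to_time = lambda ops: ops["unif"] * T_UNIF + ops["call"] * T_CALL
    return to_time(lower_ops), to_time(upper_ops)

for n in (1_000, 10_000, 100_000):
    lo, hi = time_bounds(n)
    print(f"n={n:>7}: {lo:.2e} s <= time <= {hi:.2e} s")
```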

Relevance:

30.00%

Publisher:

Abstract:

This paper contributes a unified formulation that merges previous analyses of predicting the performance (value function) of a given sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
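
The paper's own adaptive algorithm is not reproduced in this abstract; as a standard baseline for linear value estimation in this setting, the sketch below runs least-squares temporal difference (LSTD) on a small synthetic Markov reward process under a fixed policy.

```python
# LSTD(0) baseline: linear value-function estimation from features.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_feat, gamma = 50, 8, 0.9

# Random MDP under a fixed policy: transition matrix P and rewards r
P = rng.dirichlet(np.ones(n_states), size=n_states)
r = rng.uniform(0, 1, n_states)
Phi = rng.normal(size=(n_states, n_feat))          # state features

# True value function for reference: v = (I - gamma * P)^-1 r
v_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)

# Collect a long trajectory and form the LSTD system A theta = b
s = 0
A = np.zeros((n_feat, n_feat))
b = np.zeros(n_feat)
for _ in range(100_000):
    s_next = rng.choice(n_states, p=P[s])
    phi = Phi[s]
    A += np.outer(phi, phi - gamma * Phi[s_next])
    b += phi * r[s]
    s = s_next

theta = np.linalg.solve(A, b)
err = np.linalg.norm(Phi @ theta - v_true) / np.linalg.norm(v_true)
print(f"relative error of linear value estimate: {err:.3f}")
```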