992 results for estimating function
Abstract:
Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Despite their strong dependence on tephra-deposit exposure and on the distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy because of their practical and fast application. The most recent methods involve the best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows better agreement with observed data, reconciling the debate on the use of the exponential versus the power-law method. In fact, the Weibull best fit depends on only three free parameters, can reproduce the gradual thinning of tephra deposits well, and does not depend on the choice of arbitrary segments or of arbitrary extremes of integration.
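As an illustration of the Weibull approach described above, the following is a minimal sketch (not the authors' code) of fitting a three-parameter Weibull-type thinning curve to thickness versus square root of isopach area. The parameterization T(x) = theta*(x/lambda)^(n-2)*exp(-(x/lambda)^n), the closed-form volume 2*theta*lambda^2/n, and the field data are assumptions made for illustration only.

```python
# Minimal sketch: fit a Weibull-type thinning curve and integrate it to a volume.
import numpy as np
from scipy.optimize import curve_fit

def weibull_thinning(x, theta, lam, n):
    """Thickness (m) as a function of the square root of isopach area (km)."""
    return theta * (x / lam) ** (n - 2.0) * np.exp(-((x / lam) ** n))

# Hypothetical field data: sqrt(isopach area) in km, thickness in m.
sqrt_area = np.array([1.0, 3.0, 7.0, 15.0, 30.0, 60.0])
thickness = np.array([2.1, 1.2, 0.55, 0.18, 0.04, 0.006])

params, _ = curve_fit(weibull_thinning, sqrt_area, thickness,
                      p0=[1.0, 10.0, 1.0], bounds=(0.0, np.inf), maxfev=10000)
theta, lam, n = params

# For this parameterization, integrating T over the isopach area gives
# V = 2 * theta * lambda^2 / n (theta converted from m to km for km^3).
volume_km3 = 2.0 * (theta / 1000.0) * lam ** 2 / n
print(f"theta={theta:.3g} m, lambda={lam:.3g} km, n={n:.3g}, volume ~ {volume_km3:.3g} km^3")
```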
Abstract:
Atmospheric aerosols are now actively studied, in particular because of their radiative and climate impacts. Estimates of the direct aerosol radiative perturbation, caused by extinction of incident solar radiation, usually rely on radiative transfer codes and involve simplifying hypotheses. This paper addresses two approximations that are widely used for the sake of simplicity and to limit the computational cost of the calculations. First, it is shown that using a Lambertian albedo instead of the more rigorous bidirectional reflectance distribution function (BRDF) to model the ocean surface radiative properties leads to large relative errors in the instantaneous aerosol radiative perturbation. When averaging over the day, these errors cancel out to acceptable levels of less than 3% (except in the northern hemisphere winter). The second aim of this study is to address aerosol non-sphericity effects. Comparing an experimental phase function with an equivalent Mie-calculated phase function, we find acceptable relative errors if the aerosol radiative perturbation calculated for a given optical thickness is daily averaged. However, retrieval of the optical thickness of non-spherical aerosols assuming spherical particles can lead to significant errors, owing to significant differences between the spherical and non-spherical phase functions. Discrepancies in aerosol radiative perturbation between the spherical and non-spherical cases are sometimes reduced and sometimes enhanced if the aerosol optical thickness for the spherical case is adjusted to fit the simulated radiance of the non-spherical case.
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well-defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction; however, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent to, though computationally more expensive than, the volume frequency distribution approach. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
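To make the reference-state idea concrete, here is a minimal sketch of the conventional "sorting" construction of a Lorenz reference state, which the abstract notes is equivalent to (but more expensive than) the volume-frequency-distribution approach. The flat-bottomed ocean of fixed horizontal area, the single density value per parcel, and the toy numbers are illustrative assumptions; the paper's method bins parcels in temperature-salinity space instead of sorting them individually.

```python
# Minimal sketch: build a reference density profile by stacking parcels
# from densest (bottom) to lightest (surface).
import numpy as np

def reference_profile(density, volume, area):
    """Return (z_top, rho_ref): depth of the top of each sorted parcel layer
    and the corresponding reference density, for an ocean of horizontal area `area`."""
    order = np.argsort(density)[::-1]            # densest first (stacked from the bottom up)
    rho_sorted = density[order]
    thickness = volume[order] / area             # layer thickness contributed by each parcel
    depth_bottom = np.sum(thickness)
    z_top = depth_bottom - np.cumsum(thickness)  # depth (positive down) of each layer's top
    return z_top, rho_sorted

# Hypothetical parcels: densities (kg m^-3) and volumes (m^3).
rho = np.array([1027.1, 1026.4, 1027.8, 1025.9])
vol = np.array([2.0e14, 3.0e14, 1.5e14, 2.5e14])
z_ref, rho_ref = reference_profile(rho, vol, area=3.6e14)
print(np.c_[z_ref, rho_ref])
```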
Abstract:
In this paper, we consider the problem of estimating the number of times an air quality standard is exceeded in a given period of time. A non-homogeneous Poisson model is proposed to analyse this issue. The rate at which the Poisson events occur is given by a rate function λ(t), t ≥ 0, which depends on parameters that need to be estimated. Two forms of λ(t) are considered: one of the Weibull form and the other of the exponentiated-Weibull form. Parameter estimation is carried out using a Bayesian formulation based on the Gibbs sampling algorithm. The prior distributions for the parameters are assigned in two stages: in the first stage, non-informative prior distributions are used; using the information provided by the first stage, more informative prior distributions are used in the second. The theoretical development is applied to data provided by the monitoring network of Mexico City. The rate function that best fits the data varies according to the region of the city and/or the threshold considered: in some cases the best fit is the Weibull form, and in others the exponentiated-Weibull form. Copyright (C) 2007 John Wiley & Sons, Ltd.
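To make the two rate functions concrete, the sketch below evaluates a Weibull-type cumulative rate m(t) = (t/s)^a and an exponentiated-Weibull-type cumulative rate K*(1 - exp(-(t/s)^a))^q, and recovers the instantaneous rate numerically. The particular parameterizations, the scale constant K, and all parameter values are illustrative assumptions, not the paper's fitted forms or Mexico City estimates.

```python
# Minimal sketch: cumulative and instantaneous rates for a non-homogeneous Poisson model.
import numpy as np

def weibull_m(t, a, s):
    """Expected number of exceedances in [0, t] under a Weibull-type rate."""
    return (t / s) ** a

def expweibull_m(t, a, s, q, K):
    """Expected number of exceedances in [0, t] under an exponentiated-Weibull-type rate."""
    return K * (1.0 - np.exp(-((t / s) ** a))) ** q

t = np.linspace(1.0, 365.0, 365)                    # days
rate_w = np.gradient(weibull_m(t, 1.3, 120.0), t)   # instantaneous rate lambda(t)
rate_ew = np.gradient(expweibull_m(t, 1.3, 120.0, 0.8, 40.0), t)

# Under the non-homogeneous Poisson model, the count of exceedances in [0, T]
# is Poisson-distributed with mean m(T).
print("instantaneous rates at day 180:", rate_w[179], rate_ew[179])
print("E[N(365)]:", weibull_m(365.0, 1.3, 120.0), expweibull_m(365.0, 1.3, 120.0, 0.8, 40.0))
```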
Abstract:
Local influence diagnostics based on estimating equations, with the gradient vector derived from a general fit function, are developed for repeated measures regression analysis. Our proposal generalizes tools used in other studies (Cook, 1986; Cadigan and Farrell, 2002) by considering local influence diagnostics for a statistical model whose estimation involves an estimating equation in which not all observations are necessarily independent of each other. The measures of local influence are illustrated with simulated data sets to assess influential observations, and applications using real data are presented. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Surveys of commercial markets combined with molecular taxonomy (i.e. molecular monitoring) provide a means to detect products from illegal, unregulated and/or unreported (IUU) exploitation, including the sale of fisheries bycatch and wild meat (bushmeat). Capture-recapture analyses of market products using DNA profiling have the potential to estimate the total number of individuals entering the market. However, these analyses are not directly analogous to those of living individuals because a ‘market individual’ does not die suddenly but, instead, remains available for a time in decreasing quantities, rather like the exponential decay of a radioactive isotope. Here we use mitochondrial DNA (mtDNA) sequences and microsatellite genotypes to individually identify products from North Pacific minke whales (Balaenoptera acutorostrata ssp.) purchased in 12 surveys of markets in the Republic of (South) Korea from 1999 to 2003. By applying a novel capture-recapture model with a decay rate parameter to the 205 unique DNA profiles found among 289 products, we estimated that the total number of whales entering trade across the five-year survey period was 827 (SE, 164; CV, 0.20) and that the average ‘half-life’ of products from an individual whale on the market was 1.82 months (SE, 0.24; CV, 0.13). Our estimate of whales in trade (reflecting the true numbers killed) was significantly greater than the officially reported bycatch of 458 whales for this period. This unregulated exploitation has serious implications for the survival of this genetically distinct coastal population. Although our capture-recapture model was developed for specific application to the Korean whale-meat markets, the exponential decay function could be modified to improve the estimates of trade in other wildmeat or fisheries markets or abundance of living populations by noninvasive genotyping.
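As an illustration of the exponential-decay availability idea behind the market capture-recapture model, the sketch below computes the probability that a product entering trade at time t0 is still "catchable" in a later survey, assuming exp(-k*(t - t0)) decay with half-life ln(2)/k. The survey times are hypothetical; the 1.82-month half-life is the estimate reported in the abstract, used here only as an example value.

```python
# Minimal sketch: exponential decay of market availability of a whale product.
import numpy as np

def availability(t, t0, half_life_months):
    """Probability that a product entering trade at t0 is still available at time t (months)."""
    k = np.log(2.0) / half_life_months
    return np.where(t >= t0, np.exp(-k * (t - t0)), 0.0)

surveys = np.array([0.0, 3.0, 6.0, 12.0, 24.0])   # months after the product entered trade
print(availability(surveys, t0=0.0, half_life_months=1.82))
```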
Abstract:
1. The crabeater seal Lobodon carcinophaga is considered to be a key species in the krill-based food web of the Southern Ocean. Reliable estimates of the abundance of this species are necessary to allow the development of multispecies, predator–prey models as a basis for management of the krill fishery in the Southern Ocean. 2. A survey of crabeater seal abundance was undertaken in 1,500,000 km2 of pack-ice off east Antarctica between longitudes 64–150° E during the austral summer of 1999/2000. Sighting surveys, using double observer line transect methods, were conducted from an icebreaker and two helicopters to estimate the density of seals hauled out on the ice in survey strips. Satellite-linked dive recorders were deployed on a sample of seals to estimate the probability of seals being hauled out on the ice at the times of day when sighting surveys were conducted; a simple illustration of this correction is sketched below. Model-based inference, involving fitting a density surface, was used to infer densities in the entire survey region from estimates in the surveyed areas. 3. Crabeater seal abundance was estimated to be between 0.7 and 1.4 million animals (with 95% confidence), with the most likely estimate slightly less than 1 million. 4. Synthesis and applications. The estimation of crabeater seal abundance in Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR) management areas off east Antarctica, where krill biomass has also been estimated recently, provides the data necessary to begin extending from single-species to multispecies management of the krill fishery. Incorporation of all major sources of uncertainty allows a precautionary interpretation of crabeater abundance and demand for krill in keeping with CCAMLR’s precautionary approach to management. While this study focuses on the crabeater seal and management of living resources in the Southern Ocean, it has also led to technical and theoretical developments in survey methodology that have widespread potential application in ecological and resource management studies, and will contribute to a more fundamental understanding of the structure and function of the Southern Ocean ecosystem.
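A back-of-the-envelope sketch of the haul-out correction mentioned in point 2: the surface density estimated from the line-transect surveys is divided by the probability that a seal is hauled out at survey time, then scaled to the survey area. The density and haul-out probability below are hypothetical values, not the study's estimates.

```python
# Minimal sketch: correcting on-ice density for haul-out probability and scaling to area.
density_on_ice = 0.40          # seals per km^2 visible on the ice (hypothetical)
p_hauled_out = 0.60            # probability of being hauled out at survey time (hypothetical)
survey_area_km2 = 1_500_000    # pack-ice area covered by the survey (from the abstract)

abundance = density_on_ice / p_hauled_out * survey_area_km2
print(f"corrected abundance ~ {abundance:,.0f} seals")
```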
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described, making them accessible to practicing ecologists.
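As an illustration of the conventional-distance-sampling step described in point 4, the sketch below fits a half-normal detection function g(x) = exp(-x^2/(2*sigma^2)) to perpendicular distances by maximum likelihood and converts it to a density estimate via D = n/(2*L*mu), where mu is the effective strip half-width. The data, truncation distance and transect length are hypothetical, and this is an illustrative re-derivation, not the Distance software's implementation.

```python
# Minimal sketch: half-normal detection function and line-transect density estimate.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

distances = np.array([2.0, 5.0, 7.5, 1.0, 12.0, 3.5, 8.0, 0.5, 15.0, 6.0])  # metres (hypothetical)
w = 20.0        # truncation distance (m)
L = 5000.0      # total transect length (m)

def negloglik(sigma):
    # Density of an observed distance is g(x) divided by the integral of g over [0, w].
    mu = sigma * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma) - 0.5)
    g = np.exp(-distances**2 / (2.0 * sigma**2))
    return -np.sum(np.log(g / mu))

sigma_hat = minimize_scalar(negloglik, bounds=(0.1, 100.0), method="bounded").x
mu_hat = sigma_hat * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma_hat) - 0.5)  # effective half-width
D_hat = len(distances) / (2.0 * L * mu_hat)                                  # animals per m^2
print(f"sigma = {sigma_hat:.2f} m, effective half-width = {mu_hat:.2f} m, "
      f"density = {D_hat * 1e6:.1f} animals per km^2")
```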
Abstract:
In this paper, we focus on a model for two types of tumors. Tumor development can be described by four types of death rates and four tumor transition rates. We present a general semi-parametric model to estimate the tumor transition rates based on data from survival/sacrifice experiments. In the model, we assume that the tumor transition rates are proportional to a common parametric function, but make no assumption about the death rates from any state. We derive the likelihood function of the data observed in such an experiment and an EM algorithm that simplifies the estimation procedure. This article extends work on semi-parametric models for one type of tumor (see Portier and Dinse, and Dinse) to two types of tumors.
Abstract:
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
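To illustrate the kind of estimator discussed above, here is a minimal sketch of a simple moment-type kernel estimate of the occurrence rate: event times are kernel-smoothed and divided by the number of subjects still under observation at each grid point. The Gaussian kernel, the fixed bandwidth and the toy data are illustrative assumptions; the paper's specific estimators, resmoothing step and bootstrap bandwidth selection are not reproduced.

```python
# Minimal sketch: kernel-smoothed occurrence rate from recurrent event data.
import numpy as np

def rate_estimate(t_grid, event_times, censor_times, h):
    """Kernel estimate of the occurrence rate at each point of t_grid."""
    t_grid = np.asarray(t_grid)[:, None]
    all_events = np.concatenate(event_times)[None, :]
    kern = np.exp(-0.5 * ((t_grid - all_events) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    at_risk = np.sum(np.asarray(censor_times)[None, :] >= t_grid, axis=1)  # subjects still observed
    return kern.sum(axis=1) / np.maximum(at_risk, 1)

# Hypothetical recurrent-event data: per-subject event times and censoring times.
events = [np.array([1.2, 3.4, 7.8]), np.array([0.9, 5.1]), np.array([2.2, 2.9, 6.0, 9.5])]
censor = [10.0, 6.0, 12.0]
grid = np.linspace(0.0, 12.0, 25)
print(np.round(rate_estimate(grid, events, censor, h=1.5), 3))
```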
Abstract:
Consider a nonparametric regression model Y = μ*(X) + ε, where the explanatory variables X are endogenous and ε satisfies the conditional moment restriction E[ε | W] = 0 with probability 1 for instrumental variables W. It is well known that in these models the structural parameter μ* is 'ill-posed' in the sense that the function mapping the data to μ* is not continuous. In this paper, we derive the efficiency bounds for estimating the linear functionals E[p(X)μ*(X)] and ∫_{supp(X)} p(x)μ*(x) dx, where p is a known weight function and supp(X) is the support of X, without assuming μ* to be well-posed or even identified.
Abstract:
A Bayesian approach to estimation of the regression coefficients of a multinomial logit model with ordinal scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function, which is treated as an arbitrary scalar function. The Gauss-Markov theorem is then used to determine a function of the link that produces a random vector of coefficients, and the posterior distribution of this random vector is used to estimate the regression coefficients. The method described is referred to as a Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described: Case I involves a cumulative logit model and Case II involves a proportional-odds model. All inferences about the coefficients for both cases are described in terms of the posterior distribution of the regression coefficients. The results from the BGLS method are compared to maximum likelihood estimates of the regression coefficients. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model, and it is neither complex nor computationally intensive. The BGLS method offers several advantages over other Bayesian approaches.