992 results for Depth Estimation
Abstract:
We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated against measurements from rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieve similar horizontal distributions in terms of the locations of typhoon eyes and rain bands. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS error of our retrievals are 0.95 and ~2 mm hr⁻¹, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated into models to improve forecasts and mitigate potential damage in Taiwan during typhoon seasons.
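The posterior-mean form typical of such Bayesian rain-rate retrievals can be sketched with invented numbers; everything here (database size, channel responses, noise levels) is illustrative, not the actual TMI algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical a priori database: rain rates (mm/hr) paired with
# simulated brightness temperatures (K) for two TMI-like channels.
rain_db = rng.uniform(0.0, 20.0, size=500)
tb_db = np.column_stack([250.0 + 2.0 * rain_db,   # channel that warms with rain
                         280.0 - 1.5 * rain_db])  # channel that cools with rain
tb_db += rng.normal(0.0, 1.0, size=tb_db.shape)   # observation/model noise

def bayes_retrieve(tb_obs, tb_db, rain_db, sigma=2.0):
    """Posterior-mean rain rate: database entries weighted by a
    Gaussian likelihood of the observed brightness temperatures."""
    d2 = np.sum((tb_db - tb_obs) ** 2, axis=1) / sigma**2
    w = np.exp(-0.5 * (d2 - d2.min()))            # numerically stabilised weights
    return np.sum(w * rain_db) / np.sum(w)

# Retrieve for an observation consistent with roughly 10 mm/hr.
r_hat = bayes_retrieve(np.array([270.0, 265.0]), tb_db, rain_db)
```

The weighted average concentrates on database profiles whose simulated brightness temperatures resemble the observation, which is what lets such schemes stay well-behaved at the heavy rain rates where regression methods saturate.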
Abstract:
The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply this model to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
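The simplest member of the compared family, the SLX model, can be illustrated with a minimal synthetic example; the ring-neighbour weight matrix and all coefficient values below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical row-standardised contiguity matrix: each unit's two
# neighbours on a ring get weight 0.5 (purely illustrative geography).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

x = rng.normal(size=n)
beta, theta = 1.0, 0.4                   # direct effect and spillover effect
y = 2.0 + beta * x + theta * (W @ x) + rng.normal(scale=0.3, size=n)

# SLX: regress the outcome on own covariates X and spatial lags WX.
X = np.column_stack([np.ones(n), x, W @ x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The coefficient on `W @ x` is the spillover term whose presence (or absence) the formal Bayesian model comparison adjudicates; the Durbin error variants additionally allow spatial structure in the disturbances.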
Abstract:
This study analyzes organic adoption decisions using a rich set of time-to-organic durations collected from avocado smallholders in Michoacán, Mexico. We derive robust, intra-sample predictions about the profiles of entry and exit within the conventional-versus-organic complex, and we explore the sensitivity of these predictions to the choice of functional form. The dynamic nature of the sample allows us to make retrospective predictions, and we establish, precisely, the profile of organic entry had the respondents been provided with optimal amounts of adoption-restraining resources. A fundamental problem in the dynamic adoption literature, hitherto unrecognized, is discussed and consequent extensions are suggested.
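A duration analysis of this kind fits a parametric hazard to time-to-adoption data; the Weibull below is only one candidate functional form, and the durations are simulated, not the Michoacán data:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)

# Hypothetical time-to-organic durations in years.
durations = weibull_min.rvs(c=1.5, scale=5.0, size=300, random_state=rng)

# Maximum-likelihood Weibull fit with the location pinned at zero.
shape, loc, scale = weibull_min.fit(durations, floc=0)

# shape > 1 implies an adoption hazard rising with time spent in the
# conventional sector; shape < 1, a hazard that falls.
```

Comparing such fits across families (Weibull, log-logistic, log-normal, and so on) is exactly the functional-form sensitivity exercise the abstract describes.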
Abstract:
In this paper, a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a derivative-free line search, and for appropriate coverage of points on the FPS boundary it is suggested to start with a sequential box pavement procedure. The SVM approach is illustrated on a simple sine and exponential model with two parameters, and on an agro-forestry simulation model.
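The feasible parameter set itself is easy to illustrate. The brute-force grid sketch below shows what the FPS is for a toy two-parameter sine model with bounded noise; the paper's SVM regression replaces this exhaustive evaluation with a learned boundary approximation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy set-membership problem for y = a * sin(b * t): the noise is only
# assumed bounded, |e| <= eps, with the bound taken conservatively.
t = np.linspace(0.0, 3.0, 25)
a_true, b_true, eps = 1.0, 2.0, 0.2
y = a_true * np.sin(b_true * t) + rng.uniform(-0.1, 0.1, size=t.size)

def in_fps(a, b):
    """A parameter pair is feasible iff the model stays within the
    assumed error bound at every observation (the definition of the FPS)."""
    return np.all(np.abs(y - a * np.sin(b * t)) <= eps)

# Grid characterisation of the FPS over a box of candidate parameters.
grid = [(a, b) for a in np.linspace(0.5, 1.5, 60)
               for b in np.linspace(1.5, 2.5, 60)
               if in_fps(a, b)]
```

The grid scales exponentially with the number of parameters, which is the practical motivation for boundary-oriented methods such as the SVM approach.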
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before providing a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
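A crude rejection-ABC version of evidence-based model comparison can be sketched on a small exponential-family example of the kind mentioned above; the two candidate models, their priors, the summary statistics and the tolerance are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed data: 500 draws from an Exponential distribution (mean 0.5).
y_obs = rng.exponential(scale=0.5, size=500)
s_obs = np.array([y_obs.mean(), y_obs.std()])

def abc_evidence(simulate, n_sims=10000, eps=0.08):
    """Crude ABC evidence estimate: the acceptance rate when parameters
    are drawn from the prior and the simulated summaries (mean, std)
    fall within eps of the observed summaries."""
    accept = 0
    for _ in range(n_sims):
        y = simulate()
        s = np.array([y.mean(), y.std()])
        accept += np.linalg.norm(s - s_obs) < eps
    return accept / n_sims

# M1: exponential likelihood, scale ~ Uniform(0, 2) prior.
ev1 = abc_evidence(lambda: rng.exponential(scale=rng.uniform(0, 2), size=500))
# M2: half-normal likelihood, scale ~ Uniform(0, 2) prior.
ev2 = abc_evidence(lambda: np.abs(rng.normal(0.0, rng.uniform(0, 2), size=500)))
```

The ratio of the two acceptance rates approximates the Bayes factor; estimating each model's evidence independently, as here, is what removes the need for a sampler that mixes between models.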
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) to avoid the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior, due to the use of MCMC to simulate from the latent graphical model in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
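The MCMC simulation from the latent graphical model that both methodologies rely on can be sketched for the Ising case; the lattice size, coupling strength and sweep count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def ising_gibbs(n=16, beta=0.4, sweeps=200):
    """Single-site Gibbs sampler for a ferromagnetic Ising model on an
    n x n torus: the kind of approximate MCMC draw from the latent model
    that stands in for exact simulation inside the exchange algorithm."""
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                # Sum of the four neighbouring spins (periodic boundary).
                nb = (s[(i - 1) % n, j] + s[(i + 1) % n, j]
                      + s[i, (j - 1) % n] + s[i, (j + 1) % n])
                # Conditional probability of spin +1 given the neighbours.
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_up else -1
    return s

sample = ising_gibbs()
```

Because such a Gibbs run is only approximately a draw from the model, each overall algorithm in the paper targets an approximation to the true posterior, as the abstract notes.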
Abstract:
Despite the fact that mites were used at the dawn of forensic entomology to elucidate the postmortem interval, their use in current cases remains quite low for procedural reasons such as inadequate taxonomic knowledge. Special interest is focused on the phoretic stages of some mite species, because phoront-host specificity allows us, on many occasions, to deduce the presence of the carrier (usually Diptera or Coleoptera) even when it has not been seen in the sampling performed in situ or in the autopsy room. In this article, we describe two cases where Poecilochirus austroasiaticus Vitzthum (Acari: Parasitidae) was sampled in the autopsy room. In the first case, we were able to sample the host, Thanatophilus ruficornis (Küster) (Coleoptera: Silphidae), which was still carrying phoretic stages of the mite on its body. That attachment allowed, by observing starvation/feeding periods as a function of digestive tract filling, the establishment of chronological cycles of phoretic behavior, showing maximum peaks of phoronts during arrival at and departure from the corpse and the lowest values during the phase of host feeding. From the sarcosaprophagous fauna, we were able to determine in this case a minimum postmortem interval of 10 days. In the second case, we found no Silphidae at the place where the corpse was found or at the autopsy, but a postmortem interval of 13 days could still be established thanks to the high specificity of this interspecific relationship and the timing of this coleopteran family's departure from the corpse.
Abstract:
A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.
Abstract:
A rain shelter experiment was conducted in a 90-year-old Norway spruce stand in the Kysucké Beskydy Mts (Slovakia). Three rain shelters were constructed in the stand to prevent rainfall from reaching the soil and to reduce water availability in the rhizosphere. Fine root biomass and necromass were repeatedly measured throughout a growing season by soil coring. We established the quantities of fine root biomass (live) and necromass (dead) at soil depths of 0-5, 5-15, 15-25, and 25-35 cm. Significant differences in soil moisture content between control and drought plots were found in the top 15 cm of soil after 20 weeks of rainfall manipulation (lasting from early June to late October). Our observations show that even relatively light drought decreased total fine root biomass from 272.0 to 242.8 g m⁻² and increased the amount of necromass from 79.2 to 101.2 g m⁻² in the top 35 cm of soil. Very fine roots, i.e. those with a diameter of up to 1 mm, were more affected than total fine roots, defined as 0-2 mm. The effect of reduced water availability was depth-specific; as a result, we observed a modification of the vertical distribution of fine roots. More roots in the drought treatment were produced in the wetter soil horizons at 25-35 cm depth than at the surface. We conclude that the fine and very fine root systems of Norway spruce have the capacity to re-allocate resources to roots at different depths in response to environmental signals, resulting in changes in the necromass-to-biomass ratio.
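The reported pool sizes imply the shift in the necromass-to-biomass ratio directly; a quick check of the arithmetic:

```python
# Reported fine-root pools (g per m^2, top 35 cm), control vs. drought.
bio_ctrl, bio_dry = 272.0, 242.8
nec_ctrl, nec_dry = 79.2, 101.2

bio_change = (bio_dry - bio_ctrl) / bio_ctrl   # about -10.7 % biomass
nec_change = (nec_dry - nec_ctrl) / nec_ctrl   # about +27.8 % necromass
ratio_ctrl = nec_ctrl / bio_ctrl               # about 0.29
ratio_dry = nec_dry / bio_dry                  # about 0.42
```

So a roughly 11 % loss of live biomass combines with a 28 % gain in necromass to raise the necromass-to-biomass ratio from about 0.29 to about 0.42.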
Abstract:
Ground-based aerosol optical depth (AOD) climatologies at three high-altitude sites in Switzerland (Jungfraujoch and Davos) and Southern Germany (Hohenpeissenberg) are updated and re-calibrated for the period 1995-2010. In addition, the AOD time-series are augmented with previously unreported data and are homogenized for the first time. Trend analysis revealed weak AOD trends (λ = 500 nm) at Jungfraujoch (JFJ; +0.007 decade⁻¹), Davos (DAV; +0.002 decade⁻¹) and Hohenpeissenberg (HPB; -0.011 decade⁻¹), where the JFJ and HPB trends were statistically significant at the 95% and 90% confidence levels. However, a linear trend for the JFJ 1995-2005 period was found to be more appropriate than for 1995-2010 due to the influence of stratospheric AOD, which gave a trend of -0.003 decade⁻¹ (significant at the 95% level). When correcting for a recently available stratospheric AOD time-series, accounting for Pinatubo (1991) and more recent volcanic eruptions, the 1995-2010 AOD trends decreased slightly at DAV and HPB but remained weak at +0.000 decade⁻¹ and -0.013 decade⁻¹ (significant at the 95% level). The JFJ 1995-2005 AOD time-series similarly decreased to -0.003 decade⁻¹ (significant at the 95% level). We conclude that despite a more detailed re-analysis of these three time-series, which have been extended by five years to the end of 2010, a significant decrease in AOD at these three high-altitude sites has still not been observed.
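The per-decade trends quoted above come from straight-line fits to the AOD series; a minimal sketch on synthetic data (the series below is invented, not the station record):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative monthly AOD series at 500 nm over 1995-2010 with a weak
# built-in trend of +0.007 per decade plus noise.
t = np.arange(1995.0, 2011.0, 1.0 / 12.0)
aod = 0.06 + 0.0007 * (t - t[0]) + rng.normal(scale=0.01, size=t.size)

# Least-squares slope, expressed per decade as in the trend analysis.
slope_per_year = np.polyfit(t, aod, 1)[0]
trend_per_decade = 10.0 * slope_per_year
```

In practice, whether such a slope is "significant at the 95% level" is judged against its standard error, which is why the short 1995-2005 JFJ segment and the volcanically perturbed full record can support different conclusions.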
Abstract:
This paper evaluates the relationship between the cloud modification factor (CMF) in the ultraviolet erythemal range and the cloud optical depth (COD) retrieved from the Aerosol Robotic Network (AERONET) "cloud mode" algorithm under overcast conditions (confirmed with sky images) at Granada, Spain, mainly for non-precipitating, overcast and relatively homogeneous water clouds. The empirical CMF showed a clear exponential dependence on experimental COD values, decreasing approximately from 0.7 for COD = 10 to 0.25 for COD = 50. In addition, these COD measurements were used as input to the libRadtran radiative transfer code, allowing the simulation of CMF values for the selected overcast cases. The modeled CMF exhibited a dependence on COD similar to the empirical CMF, but the modeled values present a strong underestimation with respect to the empirical factors (mean bias of 22%). To explain this high bias, an exhaustive comparison between modeled and experimental UV erythemal irradiance (UVER) data was performed. The comparison revealed that the radiative transfer simulations were 8% higher than the observations for clear-sky conditions. The rest of the bias (~14%) may be attributed to the substantial underestimation of modeled UVER with respect to experimental UVER under overcast conditions, although the correlation between the two datasets was high (R² ~ 0.93). A sensitivity test showed that the main factor responsible for that underestimation is the experimental AERONET COD used as input to the simulations, which is retrieved from zenith radiances in the visible range. Accordingly, effective COD values in the erythemal interval were derived from an iteration procedure based on searching for the best match between modeled and experimental UVER values for each selected overcast case. These effective COD values were smaller than the AERONET COD data in about 80% of the overcast cases, with a mean relative difference of 22%.
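The quoted exponential dependence pins down a two-parameter curve from the two reported anchor values alone; a small sketch, assuming the functional form CMF(COD) = a·exp(-b·COD):

```python
import numpy as np

# Two anchor points read off the reported empirical dependence:
# CMF of about 0.70 at COD = 10 and about 0.25 at COD = 50.
cod1, cmf1 = 10.0, 0.70
cod2, cmf2 = 50.0, 0.25

# Fit CMF(COD) = a * exp(-b * COD) exactly through the two points.
b = np.log(cmf1 / cmf2) / (cod2 - cod1)
a = cmf1 * np.exp(b * cod1)

cmf = lambda cod: a * np.exp(-b * cod)
```

This gives a decay constant b of roughly 0.026 per COD unit; a least-squares fit over all overcast cases, as in the study, would of course refine both parameters.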
Abstract:
Leaders across companies initiate and implement change and thus are crucial for successful organizations. This study takes a competency perspective on leaders and investigates the competencies leaders show to facilitate effective change. The article explores the content of the construct of leaders’ change competency and examines its antecedents and effects. We conducted a case study in a German tourism company undergoing a major change process. The study identified (a) distinct content facets regarding the construct of leaders’ change competency along its two dimensions of leaders’ readiness for change and leaders’ change ability; (b) the construct’s antecedents, specifically contextual factors, leaders’ competency potentials, and attitudes toward change; and (c) beneficial effects of leaders’ change competency. The study ends with implications for research and leadership practice as well as suggestions for future studies on leaders’ change competency.
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum-efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk market participation in the Ethiopian highlands.
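The Gibbs-sampler treatment of censoring can be sketched for the Tobit core of such a model; the data are simulated and flat priors are assumed, while the paper's full sampler additionally handles the fixed-cost participation hurdle:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Simulated censored-supply data (illustrative, not the Ethiopian survey):
# latent supply y* = b0 + b1*x + e, with y = max(y*, 0) observed.
n = 400
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(size=n)
y = np.maximum(y_star, 0.0)
cens = y == 0.0

def tobit_gibbs(y, X, cens, n_iter=1000, burn=200):
    """Gibbs sampler with data augmentation for a Tobit model, the
    censored-regression building block of the double-hurdle sampler."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta, sigma2 = np.zeros(k), 1.0
    z = y.copy()
    draws = []
    for it in range(n_iter):
        # 1. Impute latent supply for censored households from a
        #    normal distribution truncated above at zero.
        mu, sd = X @ beta, np.sqrt(sigma2)
        z[cens] = truncnorm.rvs(-np.inf, (0.0 - mu[cens]) / sd,
                                loc=mu[cens], scale=sd, random_state=rng)
        # 2. beta | z, sigma2: conjugate normal draw.
        beta_hat = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # 3. sigma2 | z, beta: scaled inverse-chi-square draw.
        rss = np.sum((z - X @ beta) ** 2)
        sigma2 = rss / rng.chisquare(n)
        if it >= burn:
            draws.append(beta)
    return np.array(draws)

beta_post = tobit_gibbs(y, X, cens).mean(axis=0)
```

The data-augmentation step is what makes the otherwise awkward censored likelihood tractable, and the same device extends to the random censoring induced by fixed participation costs.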