917 results for Probability distribution functions


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Agronomia (Energia na Agricultura) - FCA

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study was to quantify the economic risks and returns for adopters of genetically modified (GM) maize in one of the major corn-producing regions of São Paulo state. We analyzed variation in the quantities and prices of insecticides used, productivity gains, and the price differentials between GM maize and conventional hybrid seeds, taking into account maize price oscillations during the period studied. The net benefits methodology was used; that is, the economic gains minus the costs of GM technology were calculated under risk conditions. The net benefit was calculated as a function of four critical variables: (1) GM maize productivity; (2) pest control costs; (3) maize price; and (4) GM seed cost. The probability distribution functions of these critical variables were estimated and included in the net benefit equation. Using the Monte Carlo simulation methodology, the following sets of indicators were estimated: central tendency measurements, variability in net benefits (total benefits minus total costs), sensitivity of the net benefits to the critical variables, and, finally, a map of the risk to GM technology adopters. These indicators allow one to design economic scenarios together with their probabilities of occurrence. The results showed an 85% probability of positive gains for farmers who adopted transgenic maize seed. The variable with the greatest impact on farmers' income was the reduction in productivity loss; that is, the higher the maize productivity, the higher the net income. The average gain from adopting transgenic rather than conventional maize seed was US$137.41 per hectare (at R$2.45/US$).
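
The structure of this risk calculation lends itself to a compact sketch. The following is a minimal, hypothetical Monte Carlo version of the net-benefit equation; all distribution parameters are illustrative assumptions, since the fitted PDFs are not reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo replicates

# Hypothetical distributions for the four critical variables
# (the fitted PDFs from the study are not reported in the abstract).
yield_gain = rng.normal(0.35, 0.15, N)               # extra GM yield, t/ha
pest_saving = rng.normal(25.0, 10.0, N)              # saved pest-control cost, US$/ha
maize_price = rng.lognormal(np.log(180.0), 0.12, N)  # maize price, US$/t
seed_premium = rng.normal(40.0, 5.0, N)              # extra GM seed cost, US$/ha

# Net benefit = gains (extra yield x price + pesticide savings) - technology cost
net_benefit = yield_gain * maize_price + pest_saving - seed_premium

print(f"mean net benefit: US${net_benefit.mean():.2f}/ha")
print(f"P(net benefit > 0): {np.mean(net_benefit > 0):.1%}")
```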

Relevance:

100.00%

Publisher:

Abstract:

Three ice type regimes at Ice Station Belgica (ISB), during the 2007 International Polar Year SIMBA (Sea Ice Mass Balance in Antarctica) expedition, were characterized and assessed for elevation, snow depth, ice freeboard and thickness. Analyses of the probability distribution functions showed great potential for satellite-based altimetry for estimating ice thickness. At issue is the altimeter sampling density required for reasonably accurate estimation of snow surface elevation, given inherent spatial averaging. This study seeks to determine the number of laser altimeter 'hits' over the ISB floe, taken as a representative Antarctic floe of mixed first- and multi-year ice types, required to statistically recreate the in situ-determined ice thickness and snow depth distributions based on the fractional coverage of each ice type. Estimates of the fractional coverage and spatial distribution of the ice types, referred to as ice 'towns', for the 5 km² floe were assessed by in situ mapping and photo-visual documentation. Simulated ICESat altimeter tracks, with spot size ~70 m and spacing ~170 m, sampled the floe's towns, generating a buoyancy-derived ice thickness distribution. A total of 115 altimeter hits were required to statistically recreate the regional thickness mean and distribution for a three-town assemblage of mixed first- and multi-year ice, and 85 hits for a two-town assemblage of first-year ice only: equivalent to 19.5 and 14.5 km respectively of continuous altimeter track over a floe region of similar structure. The results have significant implications for modeling the sea-ice sampling performance of the ICESat laser altimeter record, as well as for maximizing the sampling characteristics of satellite/airborne laser and radar altimetry missions for sea-ice thickness.
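
The sampling experiment can be sketched in a few lines. Here random altimeter 'hits' are drawn over hypothetical ice towns in proportion to their fractional coverage, and the recovered mean thickness is compared with the area-weighted regional mean; all town fractions and per-town thickness distributions below are illustrative assumptions, not the in situ values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-town assemblage: (fractional coverage, mean, std) in metres.
towns = {
    "first-year smooth":   (0.50, 0.6, 0.15),
    "first-year deformed": (0.30, 1.2, 0.40),
    "multi-year":          (0.20, 2.5, 0.70),
}
fractions = np.array([v[0] for v in towns.values()])
means = np.array([v[1] for v in towns.values()])
stds = np.array([v[2] for v in towns.values()])

true_mean = np.sum(fractions * means)  # area-weighted regional mean

def sampled_mean(n_hits):
    """Mean thickness recovered from n_hits random altimeter spots."""
    town_idx = rng.choice(len(fractions), size=n_hits, p=fractions)
    return rng.normal(means[town_idx], stds[town_idx]).mean()

for n in (25, 85, 115, 500):
    est = np.mean([sampled_mean(n) for _ in range(200)])
    print(f"{n:4d} hits -> mean thickness {est:.2f} m (true {true_mean:.2f} m)")
```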

Relevance:

100.00%

Publisher:

Abstract:

High-performance computing offers significant promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff real-time interactive basin simulator (RIBS) model is selected to simulate the hydrological process in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration, developed in previous work by the authors, that identifies the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic, as they generate an ensemble of hydrographs, taking into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires simulation of many replicas of the ensemble in real time.
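
To illustrate how a deterministic model can be run probabilistically, here is a minimal sketch: a toy linear-reservoir model stands in for RIBS, and an ensemble of hydrographs is generated by drawing its parameters from hypothetical calibrated distributions. All parameter values and the rainfall series are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
rain = np.array([0, 5, 20, 35, 15, 5, 0, 0], dtype=float)  # mm per step

def linear_reservoir(rain, k, c):
    """Toy deterministic rainfall-runoff model (a stand-in for RIBS)."""
    storage, out = 0.0, []
    for r in rain:
        storage += c * r   # effective rainfall enters storage
        q = k * storage    # outflow proportional to storage
        storage -= q
        out.append(q)
    return np.array(out)

# Parameters drawn from (hypothetical) calibrated distributions.
n_members = 100
ks = rng.beta(2, 5, n_members)         # recession parameter in (0, 1)
cs = rng.uniform(0.3, 0.8, n_members)  # runoff coefficient

ensemble = np.array([linear_reservoir(rain, k, c) for k, c in zip(ks, cs)])
print("ensemble mean peak flow:", ensemble.max(axis=1).mean())
print("90% band at peak-rain step:",
      np.percentile(ensemble[:, rain.argmax()], [5, 95]))
```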

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a two-part methodology for managing the risk posed by water supply variability to irrigated agriculture. First, an econometric model is used to explain the variation in the production value of irrigated agriculture. The explanatory variables include an index of irrigation water availability (surface storage levels), a price index representative of the crops grown in each geographical unit, and a time variable. The model corrects for autocorrelation and is applied to 16 Spanish provinces that are representative in terms of irrigated agriculture. In the second part, the fitted models are used for the economic evaluation of drought risk. Inflow variability in the hydrological system servicing each province is used to perform ex-ante evaluations of economic output for the upcoming irrigation season. The model's error and the probability distribution functions (PDFs) of the reservoirs' storage variations are used to generate Monte Carlo (Latin Hypercube) simulations of agricultural output 7 and 3 months prior to the irrigation season. The results of these simulations illustrate the different risk profiles of each management unit, which depend on farm productivity and on the probability distribution function of water inflow to reservoirs. The potential for ex-ante drought impact assessments is demonstrated. By complementing hydrological models, this method can assist water managers and decision makers in managing reservoirs.
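
A minimal sketch of the second stage follows, assuming a hypothetical fitted linear relation and Gaussian distributions for the reservoir storage index and the model error; it uses SciPy's Latin Hypercube sampler (scipy.stats.qmc) to propagate both sources of uncertainty to agricultural output.

```python
import numpy as np
from scipy.stats import qmc, norm

n = 10_000

# Latin Hypercube sample in 2 dimensions: reservoir storage variation
# and econometric model error (both distributions are hypothetical).
sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n)

storage = norm(loc=0.6, scale=0.2).ppf(u[:, 0])      # storage index before the season
model_err = norm(loc=0.0, scale=15.0).ppf(u[:, 1])   # model residual, M EUR

# Hypothetical fitted econometric relation:
# production value = intercept + slope * storage index + error
value = 120.0 + 90.0 * storage + model_err

print(f"expected output: {value.mean():.1f} M EUR")
print(f"5% worst case: {np.percentile(value, 5):.1f} M EUR")
```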

Relevance:

100.00%

Publisher:

Abstract:

Introducing cover crops (CC) interspersed with intensively fertilized crops in rotation has the potential to reduce nitrate leaching. This paper evaluates various strategies involving CC between maize and compares the economic and environmental results with respect to a typical maize-fallow rotation. The comparison is performed through stochastic (Monte-Carlo) simulation models of farms' profits using probability distribution functions (pdfs) of yield and N fertilizer saving fitted with data collected from various field trials and pdfs of crop prices and the cost of fertilizer fitted from statistical sources. Stochastic dominance relationships are obtained to rank the most profitable strategies from a farm financial perspective. A two-criterion comparison scheme is proposed to rank alternative strategies based on farm profit and nitrate leaching levels, taking the baseline scenario as the maize-fallow rotation. The results show that when CC biomass is sold as forage instead of keeping it in the soil, greater profit and less leaching of nitrates are achieved than in the baseline scenario. While the fertilizer saving will be lower if CC is sold than if it is kept in the soil, the revenue obtained from the sale of the CC compensates for the reduced fertilizer savings. The results show that CC would perhaps provide a double dividend of greater profit and reduced nitrate leaching in intensive irrigated cropping systems in Mediterranean regions.
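
The ranking step can be illustrated with a first-order stochastic dominance check on simulated profit distributions. The two profit distributions below are hypothetical stand-ins, not the fitted pdfs from the field trials.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Hypothetical profit draws (EUR/ha) for two strategies:
# cover crop sold as forage vs. the maize-fallow baseline.
profit_cc_sold = rng.normal(950, 180, n)
profit_baseline = rng.normal(880, 180, n)

def fosd(a, b, grid_points=200, tol=1e-3):
    """True if strategy a first-order stochastically dominates b,
    i.e. F_a(x) <= F_b(x) for all x (a's CDF lies below b's)."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_points)
    cdf_a = np.searchsorted(np.sort(a), grid) / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid) / len(b)
    return bool(np.all(cdf_a <= cdf_b + tol))

print("CC-sold dominates baseline:", fosd(profit_cc_sold, profit_baseline))
```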

Relevance:

100.00%

Publisher:

Abstract:

Services in smart environments aim to increase the quality of people's lives. Among the most important issues in developing this kind of environment are testing and validating such services. These tasks usually imply high costs and annoying or unfeasible real-world testing. In such cases, artificial societies may be used to simulate the smart environment (i.e. the physical environment, equipment and humans). With this aim, the CHROMUBE methodology guides test engineers when modeling human beings. Such models reproduce behaviors which are highly similar to the real ones. Originally, these models are based on automata whose transitions are governed by random variables. The automaton's structure and the probability distribution functions of each random variable are determined by a manual trial-and-error process. This paper presents an alternative extension of this methodology that avoids this manual process. It is based on learning human behavior patterns automatically from sensor data using machine learning techniques. The presented approach has been tested on a real scenario, where this extension has yielded highly accurate human behavior models.
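
A minimal sketch of the automatic alternative: simulate a sensor-derived state sequence from a (hypothetical) behaviour automaton, then recover its transition probabilities by counting. This is a simple stand-in for the machine learning step described above, with made-up states and probabilities.

```python
import numpy as np

rng = np.random.default_rng(5)
states = ["sleep", "kitchen", "living_room", "out"]

# Ground-truth transition matrix of the behaviour automaton (hypothetical).
P_true = np.array([
    [0.80, 0.15, 0.05, 0.00],
    [0.05, 0.60, 0.25, 0.10],
    [0.05, 0.20, 0.65, 0.10],
    [0.10, 0.10, 0.10, 0.70],
])

# Simulate a sensor-derived state sequence from the automaton.
seq = [0]
for _ in range(20_000):
    seq.append(rng.choice(4, p=P_true[seq[-1]]))

# 'Learn' the automaton back from the data by transition counting.
counts = np.zeros((4, 4))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
P_learned = counts / counts.sum(axis=1, keepdims=True)

print(np.round(P_learned, 2))  # should approximate P_true
```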

Relevance:

100.00%

Publisher:

Abstract:

We present a massive equilibrium simulation of the three-dimensional Ising spin glass at low temperatures. The Janus special-purpose computer has allowed us to equilibrate, using parallel tempering, L = 32 lattices down to T ≈ 0.64Tc. We demonstrate the relevance of equilibrium finite-size simulations for understanding experimental non-equilibrium spin glasses in the thermodynamic limit by establishing a time-length dictionary. We conclude that non-equilibrium experiments performed on a time scale of one hour can be matched with equilibrium results on L ≈ 110 lattices. A detailed investigation of the probability distribution functions of the spin and link overlap, as well as of their correlation functions, shows that Replica Symmetry Breaking is the appropriate theoretical framework for the physically relevant length scales. In addition, we improve on existing methodologies to ensure equilibration in parallel tempering simulations.
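
For readers unfamiliar with parallel tempering, the sketch below shows the core of the method on a tiny 1-D Ising chain, a toy stand-in rather than the 3-D spin glass or the Janus implementation: Metropolis updates within each replica, plus configuration swaps between neighbouring temperatures with the standard acceptance rule.

```python
import numpy as np

rng = np.random.default_rng(11)

L = 16                             # tiny 1-D Ising chain (toy example)
betas = np.linspace(0.2, 1.2, 8)   # inverse temperatures of the replicas
spins = rng.choice([-1, 1], size=(len(betas), L))

def energy(s):
    """Nearest-neighbour ferromagnetic energy with periodic boundaries."""
    return -np.sum(s * np.roll(s, 1))

for sweep in range(2000):
    # Metropolis single-spin update within each replica.
    for r, beta in enumerate(betas):
        i = rng.integers(L)
        dE = 2 * spins[r, i] * (spins[r, (i - 1) % L] + spins[r, (i + 1) % L])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[r, i] *= -1
    # Parallel-tempering swap between neighbouring temperatures:
    # accept with probability min(1, exp((b2 - b1)(E2 - E1))).
    r = rng.integers(len(betas) - 1)
    d = (betas[r + 1] - betas[r]) * (energy(spins[r + 1]) - energy(spins[r]))
    if d >= 0 or rng.random() < np.exp(d):
        spins[[r, r + 1]] = spins[[r + 1, r]]

print("final energies per replica:", [energy(s) for s in spins])
```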

Relevance:

100.00%

Publisher:

Abstract:

We combine multi-wavelength data in the AEGIS-XD and C-COSMOS surveys to measure the typical dark matter halo mass of X-ray selected active galactic nuclei (AGN) [L_X(2–10 keV) > 10^42 erg s^−1] in comparison with far-infrared selected star-forming galaxies detected in the Herschel/PEP survey (PACS Evolutionary Probe; L_IR > 10^11 L_⊙) and quiescent systems at z ≈ 1. We develop a novel method to measure the clustering of extragalactic populations that uses photometric redshift probability distribution functions in addition to any spectroscopy. This is advantageous in that all sources in the sample are used in the clustering analysis, not just the subset with secure spectroscopy. The method works best for large samples. The loss of accuracy because of the lack of spectroscopy is balanced by increasing the number of sources used to measure the clustering. We find that X-ray AGN, far-infrared selected star-forming galaxies and passive systems in the redshift interval 0.6 < z < 1.4 are found in haloes of similar mass, log M_DMH/(M_⊙ h^−1) ≈ 13.0. We argue that this is because the galaxies in all three samples (AGN, star-forming, passive) have similar stellar mass distributions, approximated by the J-band luminosity. Therefore, all galaxies that can potentially host X-ray AGN, because they have stellar masses in the appropriate range, live in dark matter haloes of log M_DMH/(M_⊙ h^−1) ≈ 13.0 independent of their star formation rates. This suggests that the stellar mass of X-ray AGN hosts is driving the observed clustering properties of this population. We also speculate that trends between AGN properties (e.g. luminosity, level of obscuration) and large-scale environment may be related to differences in the stellar mass of the host galaxies.
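
The idea of using every source via its photometric redshift PDF can be sketched as follows, with hypothetical Gaussian P(z) curves standing in for the real (generally non-Gaussian) PDFs: each source is weighted in a redshift slice by its PDF mass inside the slice, rather than being kept or discarded.

```python
import numpy as np

rng = np.random.default_rng(21)
n_src = 1000

# Hypothetical photometric-redshift PDFs: each source gets a Gaussian
# P(z) with its own centre and width (real PDFs are tabulated grids).
z_peak = rng.uniform(0.2, 2.0, n_src)
z_sig = rng.uniform(0.05, 0.3, n_src)

z_grid = np.linspace(0.0, 3.0, 301)
dz = z_grid[1] - z_grid[0]
pdfs = np.exp(-0.5 * ((z_grid - z_peak[:, None]) / z_sig[:, None]) ** 2)
pdfs /= pdfs.sum(axis=1, keepdims=True) * dz  # normalise each P(z)

# Weight of each source in a measurement restricted to 0.6 < z < 1.4:
# the PDF mass inside the slice.
in_slice = (z_grid > 0.6) & (z_grid < 1.4)
weights = pdfs[:, in_slice].sum(axis=1) * dz

# Every source contributes with weight P(0.6 < z < 1.4), instead of
# keeping only the subset with secure spectroscopic redshifts.
print(f"effective sample size: {weights.sum():.1f} of {n_src}")
```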

Relevance:

100.00%

Publisher:

Abstract:

High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the trade-off between longer scanning times and more angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs) which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
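
As a small illustration of the accuracy metric, the sketch below computes the Kullback-Leibler divergence between a toy two-fibre profile and a noisy reconstruction on a 1-D angular grid; a real ODF is defined on the sphere, and all values here are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discretised profiles,
    sampled on the same grid and normalised to unit mass."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy two-fibre profile on a 1-D grid of angles, plus a noisy version.
rng = np.random.default_rng(4)
theta = np.linspace(0, np.pi, 180)
truth = (np.exp(-((theta - 0.6) / 0.15) ** 2)
         + np.exp(-((theta - 2.2) / 0.15) ** 2))
noisy = np.abs(truth + rng.normal(0, 0.05, theta.size))

print(f"KL(truth || reconstruction) = {kl_divergence(truth, noisy):.4f}")
```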

Relevance:

100.00%

Publisher:

Abstract:

Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method of addressing such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by uncertainty caused by partial ignorance, which stems from the non-availability of the outputs of some GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, with the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
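
The envelope construction can be sketched directly: given empirical CDFs from several (hypothetical) GCM rainfall samples, the imprecise CDF is the pointwise band between their lower and upper envelopes.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical downscaled monsoon-rainfall samples from several GCMs
# (mm/season); each GCM yields its own empirical CDF.
gcm_samples = [rng.normal(mu, sd, 500)
               for mu, sd in [(900, 90), (860, 110), (940, 80), (880, 100)]]

grid = np.linspace(500, 1300, 400)
cdfs = np.array([np.searchsorted(np.sort(s), grid) / len(s)
                 for s in gcm_samples])

# The imprecise CDF is the envelope of the band of per-GCM CDFs.
lower_cdf = cdfs.min(axis=0)  # every model's CDF lies at or above this
upper_cdf = cdfs.max(axis=0)  # ... and at or below this

x = 800.0
i = np.searchsorted(grid, x)
print(f"P(rainfall <= {x:.0f} mm) lies in [{lower_cdf[i]:.2f}, {upper_cdf[i]:.2f}]")
```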

Relevance:

100.00%

Publisher:

Abstract:

By introducing the scattering probability of a subsurface defect (SSD) and statistical distribution functions of the SSD radius, refractive index, and position, we derive an extended bidirectional reflectance distribution function (BRDF) from the Jones scattering matrix. This function is applicable to calculations for comparison with measurements of polarized light scattering from an SSD. A numerical calculation of the extended BRDF for the case of p-polarized incident light was performed by means of the Monte Carlo method. Our numerical results indicate that the extended BRDF strongly depends on the light incidence angle, the light scattering angle, and the out-of-plane azimuth angle. We observe a 180° symmetry with respect to the azimuth angle. We further investigate the influence of the SSD density, the substrate refractive index, and the statistical distributions of the SSD radius and refractive index on the extended BRDF. For transparent substrates, we also find that the extended BRDF depends on the SSD position.
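
A structural sketch of this kind of Monte Carlo averaging follows, with a deliberately toy scattering kernel (Rayleigh-like size dependence, index contrast, depth attenuation, cosine falloff); the paper's actual kernel comes from the Jones scattering matrix, and all parameter distributions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(13)
n_defects = 100_000

# Hypothetical statistical distributions of SSD parameters.
radius = rng.lognormal(np.log(0.1), 0.4, n_defects)  # micrometres
n_def = rng.normal(1.8, 0.1, n_defects)              # defect refractive index
depth = rng.uniform(0.0, 5.0, n_defects)             # micrometres below surface

def toy_scatter_weight(radius, n_def, depth, theta_s, n_sub=1.5):
    """Toy single-defect scattering kernel: Rayleigh-like r^6 size
    dependence, index contrast, attenuation with depth, and a cosine
    falloff with scattering angle. Not the paper's Jones-matrix BRDF."""
    contrast = ((n_def**2 - n_sub**2) / (n_def**2 + 2 * n_sub**2)) ** 2
    return radius**6 * contrast * np.exp(-0.3 * depth) * np.cos(theta_s)

# Monte Carlo estimate of the ensemble-averaged scatter at one angle.
theta_s = np.deg2rad(30)
brdf = toy_scatter_weight(radius, n_def, depth, theta_s).mean()
print(f"ensemble-averaged scatter at 30 deg: {brdf:.3e} (arbitrary units)")
```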

Relevance:

100.00%

Publisher:

Abstract:

The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of 'raw' ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS and the quantile score is established. The evaluation of 'raw' ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles, but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
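
A minimal sketch of the CRPS for a 'raw' ensemble, evaluating its piecewise constant empirical CDF against a scalar observation via the standard identity CRPS = E|X − y| − ½E|X − X′|; the ensemble members and observation below are made up for illustration.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of a 'raw' ensemble: evaluates the piecewise constant
    empirical CDF of the members against the observation, using
    CRPS = E|X - y| - 0.5 * E|X - X'| with X, X' drawn from the ensemble."""
    x = np.asarray(members, float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

members = [1.8, 2.1, 2.4, 2.9, 3.3]  # hypothetical ensemble forecast
print(f"CRPS = {crps_ensemble(members, obs=2.6):.3f}")
```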