44 results for Weibull distribution function


Relevance: 80.00%

Abstract:

Land surface albedo, a key parameter in deriving Earth's surface energy balance, is used in the parameterization of numerical weather prediction, climate monitoring and climate change impact assessments. Changes in albedo due to fire have not been fully investigated on a continental and global scale. The main goal of this study, therefore, is to quantify the changes in instantaneous shortwave albedo produced by biomass burning activities and their associated radiative forcing. The study relies on the MODerate-resolution Imaging Spectroradiometer (MODIS) MCD64A1 burned-area product to create an annual composite of areas affected by fire and the MCD43C2 bidirectional reflectance distribution function (BRDF) snow-free albedo product to compute a bihemispherical reflectance time series. The approximate day of burning is used to calculate the instantaneous change in shortwave albedo. Using the corresponding National Centers for Environmental Prediction (NCEP) monthly mean downward solar radiation flux at the surface, the global radiative forcing associated with fire was computed. The analysis reveals a mean decrease in shortwave albedo of −0.014 (1σ = 0.017), causing a mean positive radiative forcing of 3.99 W m−2 (1σ = 4.89) over the 2002–2012 period in areas affected by fire. The greatest drop in mean shortwave albedo occurs in 2002, which corresponds to the highest total area burned (378 Mha) observed in the same year and produces the highest mean radiative forcing (4.5 W m−2). Africa is the main contributor in terms of burned area, but forests globally give the highest radiative forcing per unit area and thus produce detectable changes in shortwave albedo. The global mean radiative forcing for the whole period studied (~0.0275 W m−2) shows that the contribution of fires to the Earth system is not insignificant.
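The forcing calculation described above is essentially the product of the albedo change and the downward flux. A minimal sketch, with an assumed downward shortwave flux of 285 W m−2 (the actual NCEP value is not quoted in the abstract):

```python
# Hedged sketch: instantaneous shortwave radiative forcing from a
# fire-driven albedo change, RF = -(delta_albedo) * F_down. Variable
# names are illustrative, not from any MODIS/NCEP processing code.

def shortwave_forcing(delta_albedo, downward_sw_flux):
    """Radiative forcing (W m-2): a darker surface (negative delta_albedo)
    absorbs more of the incoming flux, giving a positive forcing."""
    return -delta_albedo * downward_sw_flux

# With the reported mean albedo change of -0.014 and an assumed flux of
# 285 W m-2, this reproduces the ~4 W m-2 forcing quoted above.
rf = shortwave_forcing(-0.014, 285.0)
```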

Relevance: 80.00%

Abstract:

The England and Wales precipitation (EWP) dataset is a homogeneous time series of daily accumulations from 1931 to 2014, composed from rain gauge observations spanning the region. The daily regional-average precipitation statistics are shown to be well described by a Weibull distribution, which is used to define extremes in terms of percentiles. Computed trends in annual and seasonal precipitation are sensitive to the period chosen, due to large variability on interannual and decadal timescales. Atmospheric circulation patterns associated with seasonal precipitation variability are identified. These patterns project onto known leading modes of variability, all of which involve displacements of the jet stream and storm track over the eastern Atlantic. The intensity of daily precipitation for each calendar season is investigated by partitioning all observations into eight intensity categories contributing equally to the total precipitation in the dataset. Contrary to previous results based on shorter periods, no significant trends in the most intense categories are found between 1931 and 2014. The regional-average precipitation is found to share statistical properties with the majority of individual stations across England and Wales used in previous studies. Statistics of the EWP data are examined for multi-day accumulations of up to 10 days, which are more relevant for river flooding. Four recent years (2000, 2007, 2008 and 2012) have a greater number of extreme events in the 3- and 5-day accumulations than any previous year in the record. It is the duration of precipitation events in these years that is remarkable, rather than the magnitude of the daily accumulations.
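The use of a fitted Weibull distribution to define precipitation extremes by percentile can be sketched as follows; the data here are synthetic stand-ins for the EWP series, and the shape and scale values are illustrative only:

```python
# Hedged sketch: fit a Weibull distribution to daily precipitation
# accumulations (wet days only) and define an extreme-day threshold as a
# high percentile of the fitted distribution, as the EWP analysis does.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Synthetic wet-day accumulations (mm); a shape below 1 gives the heavy
# positive skew typical of daily precipitation.
precip = weibull_min.rvs(c=0.8, scale=3.0, size=5000, random_state=rng)

# Fit shape (c) and scale, with the location fixed at zero.
c, loc, scale = weibull_min.fit(precip, floc=0)

# The 99th percentile of the fitted distribution defines an "extreme" day.
p99 = weibull_min.ppf(0.99, c, loc=loc, scale=scale)
```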

Relevance: 80.00%

Abstract:

The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = −log10(1 − ρhv), is defined, which does have Gaussian error statistics and a standard deviation that depends only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) of the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise there could be biases in the retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, the rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
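The transform and its use for confidence intervals can be sketched as follows; the standard deviation of L below is a placeholder, not the paper's pulse-number formula:

```python
# Hedged sketch: work in L = -log10(1 - rho_hv), whose sampling errors the
# paper shows to be Gaussian, form a symmetric confidence interval in L,
# then map it back to the (skewed) interval in rho_hv.
import numpy as np

def to_L(rho_hv):
    return -np.log10(1.0 - rho_hv)

def from_L(L):
    return 1.0 - 10.0 ** (-L)

rho_est = 0.995
sigma_L = 0.05   # assumed std. dev. of L for some pulse count (placeholder)

L = to_L(rho_est)
# 95% interval: symmetric in L, asymmetric (negatively skewed) in rho_hv.
ci_rho = (from_L(L - 1.96 * sigma_L),
          from_L(L + 1.96 * sigma_L))
```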

Relevance: 30.00%

Abstract:

Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull-distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
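A minimal sketch of the testing idea: under the no-frailty Weibull null, the Cox-Snell residuals behave like a unit-exponential sample, so the sum of the k largest order statistics can be referred to its simulated null distribution. The choice k = 3 and the Monte Carlo critical value are illustrative, not the paper's recommended values:

```python
# Hedged sketch: refer the sum of the k largest ordered residuals to a
# Monte Carlo critical value computed under the unit-exponential null.
import numpy as np

def top_k_sum(residuals, k):
    return float(np.sort(residuals)[-k:].sum())

def critical_value(n, k, level=0.95, n_sim=20000, seed=0):
    rng = np.random.default_rng(seed)
    sims = [top_k_sum(rng.exponential(size=n), k) for _ in range(n_sim)]
    return float(np.quantile(sims, level))

n, k = 50, 3
crit = critical_value(n, k)
# A sample with a few inflated residuals (mimicking gamma frailty) would
# tend to exceed crit; a clean exponential sample would not.
```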

Relevance: 30.00%

Abstract:

Sulphate-reducing bacteria (SRB) and methanogenic archaea (MA) are important anaerobic terminal oxidisers of organic matter. However, we have little knowledge about the distribution and types of SRB and MA in the environment or the functional role they play in situ. Here we have utilised sediment slurry microcosms amended with ecologically significant substrates, including acetate and hydrogen, and specific functional inhibitors, to identify the important SRB and MA groups in two contrasting sites on a UK estuary. Substrate and inhibitor additions had significant effects on methane production and on acetate and sulphate consumption in the slurries. By using specific 16S-targeted oligonucleotide probes we were able to link specific SRB and MA groups to the use of the added substrates. Acetate consumption in the freshwater-dominated sediments was mediated by Methanosarcinales under low-sulphate conditions and Desulfobacter under the high-sulphate conditions that simulated a tidal incursion. In the marine-dominated sediments, acetate consumption was linked to Desulfobacter. Addition of trimethylamine, a non-competitive substrate for methanogenesis, led to a large increase in Methanosarcinales signal in marine slurries. Desulfobulbus was linked to non-sulphate-dependent H2 consumption in the freshwater sediments. The addition of sulphate to freshwater sediments inhibited methane production and reduced signal from probes targeted to Methanosarcinales and Methanomicrobiales, while the addition of molybdate to marine sediments inhibited Desulfobulbus and Desulfobacterium. These data complement our understanding of the ecophysiology of the organisms detected and make a firm connection between the capabilities of species, as observed in the laboratory, and their roles in the environment. © 2003 Federation of European Microbiological Societies. Published by Elsevier Science B.V. All rights reserved.

Relevance: 30.00%

Abstract:

The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into on the order of 10 quadrature points per major gas and performing a monochromatic radiation calculation for each point. In this presentation it is shown that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated k (FSCK) method requires significantly fewer monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparing with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K/day due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided up into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. 
The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide, and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K/day can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K/day for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
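The reordering idea can be illustrated on a toy spectrum: sorting the absorption coefficients removes the fine line structure, so a few quadrature intervals on the reordered spectrum reproduce the full "line-by-line" mean transmittance. Everything below is synthetic; a real FSCK scheme optimizes the points against line-by-line training profiles as described above:

```python
# Hedged sketch of the k-distribution reordering idea, on a synthetic
# lognormal "absorption spectrum" rather than real spectroscopic data.
import numpy as np

rng = np.random.default_rng(1)
k = rng.lognormal(mean=0.0, sigma=2.0, size=10000)  # toy absorption coeffs
u = 0.1                                             # toy absorber path

# "Line-by-line" reference: mean transmittance over the full spectrum.
t_lbl = np.exp(-k * u).mean()

# k-distribution: sort the coefficients, then integrate exp(-k u) over the
# cumulative probability g using a modest number of quadrature intervals,
# each represented by its mean absorption coefficient.
k_sorted = np.sort(k)
n_quad = 16
edges = np.linspace(0, k.size, n_quad + 1).astype(int)
t_ckd = 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    weight = (hi - lo) / k.size
    t_ckd += weight * np.exp(-k_sorted[lo:hi].mean() * u)
```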

Relevance: 30.00%

Abstract:

In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems. In this setting, recent works have shown how to obtain statistics of extremes in agreement with classical Extreme Value Theory. We pursue these investigations by giving analytical expressions for the Extreme Value distribution parameters of maps that have an absolutely continuous invariant measure. We compare these analytical results with numerical experiments in which we study the convergence to limiting distributions using the so-called block-maxima approach, pointing out in which cases we obtain robust estimates of the parameters. For regular maps in which mixing properties do not hold, we show that the fitting procedure to the classical Extreme Value Distribution fails, as expected. However, we obtain an empirical distribution that can be explained starting from a different observable function, for which Nicolis et al. (Phys. Rev. Lett. 97(21): 210602, 2006) have found analytical results.
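A minimal sketch of the block-maxima approach, using the logistic map as an assumed stand-in for a mixing map with an absolutely continuous invariant measure, and scipy's GEV fit (note that scipy's shape parameter c is minus the usual ξ):

```python
# Hedged sketch: iterate a chaotic map, take maxima of an observable over
# fixed-length blocks, and fit the Generalised Extreme Value family. For a
# logarithmic distance observable the limiting law is Gumbel (shape ~ 0).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
x = rng.random()
n_blocks, block_len = 500, 1000
maxima = []
for _ in range(n_blocks):
    block_max = -np.inf
    for _ in range(block_len):
        x = 4.0 * x * (1.0 - x)        # logistic map (abs. cont. measure)
        obs = -np.log(abs(x - 0.5))    # observable peaking at x = 0.5
        block_max = max(block_max, obs)
    maxima.append(block_max)

shape, loc, scale = genextreme.fit(maxima)   # shape near 0 expected
```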

Relevance: 30.00%

Abstract:

In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems that have a singular measure. Using the block-maxima approach described in Faranda et al. [2011], we show numerically that the Extreme Value distribution for these maps can be associated with the Generalised Extreme Value family, where the parameters scale with the information dimension. The numerical analyses are performed on a few low-dimensional maps. For the middle-third Cantor set and the Sierpinski triangle obtained using Iterated Function Systems, experimental parameters show very good agreement with the theoretical values. For strange attractors such as the Lozi and Hénon maps, a slower convergence to the Generalised Extreme Value distribution is observed. Even in the presence of large statistics, the observed convergence is slower than for maps that have an absolutely continuous invariant measure. Nevertheless, within the computed uncertainty range, the results are in good agreement with the theoretical estimates.
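The middle-third Cantor set construction via its Iterated Function System can be sketched with the "chaos game"; the crude single-scale box count below merely illustrates the dimension log 2 / log 3 ≈ 0.631 emerging, not the paper's dimension or GEV estimators:

```python
# Hedged sketch: sample the middle-third Cantor set by randomly composing
# its two IFS contractions, then estimate a fractal dimension from a box
# count at one scale.
import numpy as np

rng = np.random.default_rng(3)
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]  # Cantor IFS

x, points = 0.5, []
for _ in range(20000):
    x = maps[rng.integers(2)](x)
    points.append(x)
points = np.array(points[100:])        # discard the initial transient

# Box count at scale 3^-5: the Cantor set meets 2^5 = 32 such boxes.
eps = 3.0 ** -5
boxes = np.unique(np.floor(points / eps)).size
dim_est = np.log(boxes) / np.log(1.0 / eps)   # ~ log 2 / log 3
```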

Relevance: 30.00%

Abstract:

Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Regardless of their strong dependence on tephra-deposit exposure and distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy due to their practical and fast application. The most recent methods involve the best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows a better agreement with observed data, reconciling the debate on the use of the exponential versus power-law method. In fact, the Weibull best fitting only depends on three free parameters, can well reproduce the gradual thinning of tephra deposits, and does not depend on the choice of arbitrary segments or of arbitrary extremes of integration.
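The fitting step can be sketched as follows, using one commonly quoted parameterization of the Weibull thinning curve, T(x) = θ(x/λ)^(n−2) exp[−(x/λ)^n] with three free parameters (θ, λ, n); the thickness "observations" below are synthetic, not from the paper's data set:

```python
# Hedged sketch: least-squares fit of a three-parameter Weibull thinning
# curve to thickness versus square root of isopach area.
import numpy as np
from scipy.optimize import curve_fit

def weibull_thinning(x, theta, lam, n):
    return theta * (x / lam) ** (n - 2.0) * np.exp(-((x / lam) ** n))

# Synthetic "observations": sqrt(isopach area) in km vs thickness in m,
# generated from known parameters plus multiplicative noise.
x_obs = np.linspace(2.0, 60.0, 15)
rng = np.random.default_rng(4)
t_obs = weibull_thinning(x_obs, 0.5, 20.0, 1.2) \
        * rng.lognormal(0.0, 0.05, x_obs.size)

popt, _ = curve_fit(weibull_thinning, x_obs, t_obs, p0=(1.0, 10.0, 1.0),
                    bounds=([1e-6, 1e-6, 0.5], [np.inf, np.inf, 5.0]),
                    maxfev=10000)
theta_fit, lam_fit, n_fit = popt
```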

Relevance: 30.00%

Abstract:

We introduce a new methodology that allows the construction of wave frequency distributions due to growing incoherent whistler-mode waves in the magnetosphere. The technique combines the equations of geometric optics (i.e. raytracing) with the equation of transfer of radiation in an anisotropic lossy medium to obtain spectral energy density as a function of frequency and wavenormal angle. We describe the method in detail, and then demonstrate how it could be used in an idealised magnetosphere during quiet geomagnetic conditions. For a specific set of plasma conditions, we predict that the wave power peaks off the equator at ~15 degrees magnetic latitude. The new calculations predict that wave power as a function of frequency can be adequately described by a Gaussian function, but as a function of wavenormal angle it more closely resembles a skew normal distribution. The technique described in this paper provides the first known estimate of the parallel and oblique incoherent wave spectrum resulting from growing whistler-mode waves, and offers a means to incorporate self-consistent wave-particle interactions in a kinetic model of the magnetosphere over a large volume.
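The contrast between the Gaussian frequency spectrum and the skew-normal-like wavenormal-angle spectrum can be illustrated with scipy's skewnorm, which reduces exactly to a Gaussian when its shape parameter is zero; all parameter values below are illustrative, not from the paper:

```python
# Hedged sketch: a skew normal distribution as a model shape for wave
# power versus wavenormal angle. With shape a = 0 it is exactly Gaussian.
import numpy as np
from scipy.stats import skewnorm, norm

angles = np.linspace(0.0, 60.0, 121)          # wavenormal angle (degrees)
power_angle = skewnorm.pdf(angles, a=4.0, loc=10.0, scale=15.0)

# Sanity check: the a = 0 case collapses to the normal distribution.
assert np.allclose(skewnorm.pdf(angles, a=0.0, loc=10.0, scale=15.0),
                   norm.pdf(angles, loc=10.0, scale=15.0))
```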

Relevance: 30.00%

Abstract:

A primitive equation model is used to study the sensitivity of baroclinic wave life cycles to the initial latitude-height distribution of humidity. Diabatic heating is parametrized only as a consequence of condensation in regions of large-scale ascent. Experiments are performed in which the initial relative humidity is a simple function of model level, and in some cases latitude bands are specified which are initially relatively dry. It is found that the presence of moisture can either increase or decrease the peak eddy kinetic energy of the developing wave, depending on the initial moisture distribution. A relative abundance of moisture at mid-latitudes tends to weaken the wave, while a relative abundance at low latitudes tends to strengthen it. This sensitivity exists because competing processes are at work. These processes are described in terms of energy box diagnostics. The most realistic case lies on the cusp of this sensitivity. Further physical parametrizations are then added, including surface fluxes and upright moist convection. These have the effect of increasing wave amplitude, but the sensitivity to initial conditions of relative humidity remains. Finally, 'control' and 'doubled CO2' life cycles are performed, with initial conditions taken from the time-mean zonal-mean output of equilibrium GCM experiments. The attenuation of the wave resulting from reduced baroclinicity is more pronounced than any effect due to changes in initial moisture.

Relevance: 30.00%

Abstract:

We develop an orthogonal forward selection (OFS) approach to construct radial basis function (RBF) network classifiers for two-class problems. Our approach integrates several concepts in probabilistic modelling, including cross-validation, mutual information and Bayesian hyperparameter fitting. At each stage of the OFS procedure, one model term is selected by maximising the leave-one-out mutual information (LOOMI) between the classifier's predicted class labels and the true class labels. We derive the formula for the LOOMI within the OFS framework so that it can be evaluated efficiently for model term selection. Furthermore, a Bayesian procedure of hyperparameter fitting is integrated into each stage of the OFS to infer the l2-norm-based local regularisation parameter from the data. Since each forward stage effectively fits a one-variable model, this task is very fast. The classifier construction procedure terminates automatically without the need for an additional stopping criterion, yielding very sparse RBF classifiers with excellent classification generalisation performance, which is particularly useful for noisy data sets with highly overlapping class distributions. A number of benchmark examples are employed to demonstrate the effectiveness of our proposed approach.
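A much-simplified sketch of the selection criterion: score candidate model terms by the mutual information between the labels they predict and the true labels, and keep the best-scoring term. Here the "terms" are just thresholded input features; the leave-one-out evaluation, RBF kernels and Bayesian regularisation of the actual method are all omitted:

```python
# Hedged sketch: one greedy forward-selection step driven by mutual
# information between predicted and true binary labels.
import numpy as np

def mutual_information(a, b):
    """Mutual information (nats) between two binary label vectors."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 5))
X[:, 2] += 1.5 * y                      # feature 2 carries the class signal

# Candidate "terms": each feature thresholded at its median; the forward
# step keeps the term with the highest mutual information with y.
scores = [mutual_information((X[:, j] > np.median(X[:, j])).astype(int), y)
          for j in range(X.shape[1])]
best = int(np.argmax(scores))
```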

Relevance: 30.00%

Abstract:

Filamin A (FlnA) cross-links actin filaments and connects the Von Willebrand factor receptor GPIb-IX-V to the underlying cytoskeleton in platelets. Because FlnA deficiency is embryonic lethal, mice lacking FlnA in platelets were generated by breeding FlnA(loxP/loxP) females with GATA1-Cre males. FlnA(loxP/y) GATA1-Cre males have a macrothrombocytopenia and increased tail bleeding times. FlnA-null platelets have decreased expression and altered surface distribution of GPIbα because they lack the normal cytoskeletal linkage of GPIbα to underlying actin filaments. This results in approximately 70% less platelet coverage on collagen-coated surfaces at shear rates of 1,500/s, compared with wild-type platelets. Unexpectedly, however, immunoreceptor tyrosine-based activation motif (ITAM)- and ITAM-like-mediated signals are severely compromised in FlnA-null platelets. FlnA-null platelets fail to spread and have decreased α-granule secretion, integrin αIIbβ3 activation, and protein tyrosine phosphorylation, particularly that of the protein tyrosine kinase Syk and phospholipase C-γ2, in response to stimulation through the collagen receptor GPVI and the C-type lectin-like receptor 2. This signaling defect was traced to the loss of a novel FlnA-Syk interaction, as Syk binds to FlnA at immunoglobulin-like repeat 5. Our findings reveal that the interaction between FlnA and Syk regulates ITAM- and ITAM-like-containing receptor signaling and platelet function.