969 results for Probability Density Function


Relevance: 30.00%

Abstract:

Biodegradation is the chemical degradation of materials brought about by the action of naturally occurring microorganisms. Biodegradation is a relatively rapid process under suitable conditions of moisture, temperature and oxygen availability. The logic behind blending biopolymers such as starch with inert polymers like polyethylene is that if the biopolymer component is present in sufficient quantity, and if it is removed by microorganisms in the waste disposal environment, then the base inert plastic should slowly degrade and disappear. The present work focuses on the preparation of biodegradable and photodegradable blends based on low density polyethylene (LDPE) incorporating small quantities of ionomers as compatibilizers. The thesis consists of eight chapters. The first chapter presents an introduction to the research work and a literature survey. The materials used and the experimental procedures undertaken for the study are described in the second chapter. The preparation and characterization of LDPE-biopolymer (starch/dextrin) blends are described in the third chapter. The results of investigations on the effect of poly(ethylene-co-methacrylic acid) ionomers on the compatibility of LDPE and starch are reported in Chapter 4. Chapter 5 is divided into two parts: the first part deals with the effect of metal oxides on the photodegradation of LDPE, and the second describes the role of metal stearates in the photodegradation of LDPE. The results of investigations on the role of various metal oxides as pro-oxidants in the degradation of ionomer-compatibilized LDPE-starch blends are reported in Chapter 6. Chapter 7 deals with the role of various metal stearates as pro-oxidants in the degradation of ionomer-compatibilized LDPE-starch blends. The conclusions of the investigations are presented in the last chapter of the thesis.

Relevance: 30.00%

Abstract:

Di Crescenzo and Longobardi (2002) introduced a measure of uncertainty in past lifetime distributions and studied its relationship with the residual entropy function. In the present paper, we introduce a quantile version of the entropy function in past lifetime and study its properties. Unlike the measure of uncertainty given in Di Crescenzo and Longobardi (2002), the proposed measure uniquely determines the underlying probability distribution. The measure is used to study two nonparametric classes of distributions, and we prove characterization theorems for some well-known quantile lifetime distributions.
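A worked sketch of the quantile formulation, under standard definitions (the notation here is assumed, not taken from the abstract). With distribution function $F$, density $f$, quantile function $Q = F^{-1}$ and quantile density $q = Q'$, the past entropy at time $t$ is

$$\bar{H}(t) = -\int_0^t \frac{f(x)}{F(t)} \log\frac{f(x)}{F(t)}\,dx,$$

and the substitution $x = Q(p)$, $t = Q(u)$ gives its quantile version

$$\bar{H}(u) = \log u + \frac{1}{u}\int_0^u \log q(p)\,dp.$$

Differentiating $u\bar{H}(u)$ recovers $\log q(u) = \bar{H}(u) + u\bar{H}'(u) - \log u - 1$, so $\bar{H}$ determines the quantile density $q$ uniquely, which is the sense in which a quantile measure of this type can determine the underlying distribution.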

Relevance: 30.00%

Abstract:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), as can their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating or the combination of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(P_prior), and weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P): regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(P_prior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimating functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
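For concreteness, a minimal sketch of the vector space operations involved, in standard compositional notation (assumed here, not quoted from the abstract): perturbation and powering of densities with respect to a reference measure $P$ are

$$(f \oplus g)(x) = \frac{f(x)\,g(x)}{\int f g \,\mathrm{d}P}, \qquad (\alpha \odot f)(x) = \frac{f(x)^{\alpha}}{\int f^{\alpha} \,\mathrm{d}P},$$

so Bayes' theorem, posterior $\propto$ prior $\times$ likelihood, is literally one addition in this space: $\pi_{\mathrm{post}} = \pi_{\mathrm{prior}} \oplus L$. This is the sense in which Bayesian updating and the combination of likelihoods are "simple additions/perturbations" in $A^2(P_{\mathrm{prior}})$.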

Relevance: 30.00%

Abstract:

Functional Data Analysis (FDA) deals with samples in which a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of which takes into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
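To make the transform-then-PCA route concrete, here is a minimal Python sketch assuming densities discretized on a common grid and a centred log-ratio (clr) transform as the previous data transformation; the grid, toy data and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def clr(densities, eps=1e-12):
    """Centred log-ratio transform of densities sampled on a common grid.

    densities: (n_samples, n_grid) array of nonnegative values, each row
    a density evaluated on the same equally spaced grid.
    """
    logd = np.log(densities + eps)                  # avoid log(0)
    return logd - logd.mean(axis=1, keepdims=True)  # subtract row-wise mean

def functional_pca(X, n_components=2):
    """Plain PCA on discretized functions (rows = observations)."""
    Xc = X - X.mean(axis=0)                         # centre across samples
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return scores, Vt[:n_components]                # scores and modes

# Toy example: 50 Gaussian densities with varying means on a fixed grid.
rng = np.random.default_rng(0)
grid = np.linspace(-5, 5, 200)
mus = rng.normal(0.0, 1.0, size=50)
dens = np.exp(-0.5 * (grid[None, :] - mus[:, None]) ** 2) / np.sqrt(2 * np.pi)

scores, modes = functional_pca(clr(dens), n_components=2)
print(scores.shape, modes.shape)  # (50, 2) (2, 200)
```

The clr step is what makes the analysis respect the compositional nature of densities; plain PCA on the raw curves would be the untransformed variant compared in the work.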

Relevance: 30.00%

Abstract:

This Doctoral Thesis, entitled "Computational development of quantum molecular similarity", deals fundamentally with the computational aspects of similarity measures based on the comparison of electron density functions.

The first chapter, "Quantum similarity", is introductory. It describes electron probability density functions and their significance within the framework of quantum mechanics. Their essential aspects and the mathematical conditions they must satisfy are made explicit, for a better understanding of the electron density models proposed. Electron densities are presented, mentioning the Hohenberg-Kohn theorems and outlining Bader's theory, as fundamental magnitudes in the description of molecules and in the understanding of their properties.

The chapter "Models of molecular electron densities" presents original computational procedures for fitting density functions to models expanded in terms of 1s Gaussians centred on the nuclei. The physico-mathematical constraints associated with probability distributions are introduced rigorously in the procedure called the Atomic Shell Approximation (ASA). This procedure, implemented in the ASAC program, starts from a quasi-complete functional space, from which the functions or shells of the expansion are selected variationally, according to the non-negativity requirements. The quality of these densities and of the derived similarity measures is verified extensively. The ASA model is extended to dynamic representations, physically more accurate in that they are affected by nuclear vibrations, in order to explore the effect of the damping of nuclear peaks on molecular similarity measures. Comparison of the dynamic densities with the static ones reveals a rearrangement in the dynamic densities, in line with what would constitute a manifestation of the quantum Le Chatelier principle. The ASA procedure, explicitly consistent with the N-representability conditions, is also applied to the direct determination of hydrogen-like electron densities in a density functional theory context.

The chapter "Global maximization of the similarity function" presents original algorithms for determining the maximum overlap of molecular electron densities. Quantum molecular similarity measures are identified with the maximum overlap, so that the distance between molecules is measured independently of the reference frames in which the electron densities are defined. Starting from the global solution in the limit of densities infinitely compacted at the nuclei, three levels of approximation are proposed for the systematic, non-stochastic exploration of the similarity function, enabling the efficient identification of the global maximum as well as of the different local maxima. An original parametrization of the overlap integrals through fits to Lorentzian functions is also proposed as a computational acceleration technique. In the practice of structure-activity relationships, these advances enable the efficient implementation of quantitative similarity measures and, in parallel, provide a fully automatic molecular alignment methodology.

The chapter "Similarities of atoms in molecules" describes an algorithm for comparing Bader atoms, i.e. three-dimensional regions delimited by zero-flux surfaces of the electron density function. The quantitative character of these similarities makes possible a rigorous measurement of the chemical notion of transferability of atoms and functional groups. The zero-flux surfaces and the integration algorithms used have been published recently and constitute the most accurate approach to the calculation of atomic properties.

Finally, in the chapter "Similarities in crystal structures", an original definition of similarity is proposed, specific to the comparison of the notions of smoothness or softness in the phonon distribution associated with the crystal structure. These notions appear in superconductivity studies because of the influence of electron-phonon interactions on the transition temperatures to the superconducting state. Applying this methodology to the analysis of BEDT-TTF salts reveals structural correlations between superconducting and non-superconducting salts, in agreement with hypotheses advanced in the literature concerning the relevance of certain interactions.

The thesis concludes with an appendix containing the ASAC program, the implementation of the ASA algorithm, and a final chapter with bibliographic references.
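As background to the similarity measures discussed throughout, a standard form (assumed here; the thesis may use generalizations with other positive definite operators): the overlap quantum molecular similarity measure between molecules $A$ and $B$, with electron densities $\rho_A$ and $\rho_B$, and the associated Carbó index are

$$Z_{AB} = \int \rho_A(\mathbf{r})\,\rho_B(\mathbf{r})\,\mathrm{d}\mathbf{r}, \qquad C_{AB} = \frac{Z_{AB}}{\sqrt{Z_{AA}\,Z_{BB}}},$$

where $C_{AB} \in (0, 1]$, and maximizing $Z_{AB}$ over relative molecular orientations and translations is precisely the global optimization problem addressed in the chapter on maximizing the similarity function.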

Relevance: 30.00%

Abstract:

The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the detection history over the subintervals in which it sings. The method can be viewed as generating data equivalent to closed capture-recapture information, and it differs from the distance and multiple-observer methods in that not all the birds are required to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, with a laptop computer sending signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated, and populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
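A minimal Python sketch of the kind of simulation described (a two-state Markov singing process generating per-subinterval detection histories); the transition and detection probabilities below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_history(n_intervals=4, p_start_singing=0.3, p_keep_singing=0.7,
                      p_detect_given_song=0.9):
    """Detection history of one bird over the point-count subintervals.

    Singing follows a two-state Markov chain (bouts of song followed by
    silence), the Markovian process described in the abstract; the
    specific probabilities here are illustrative only.
    """
    history = []
    singing = rng.random() < p_start_singing
    for _ in range(n_intervals):
        detected = singing and (rng.random() < p_detect_given_song)
        history.append(int(detected))
        # Markovian transition: singing tends to persist within a bout.
        singing = (rng.random() < p_keep_singing) if singing \
                  else (rng.random() < p_start_singing)
    return tuple(history)

# Simulate a known population of 60 birds; birds never detected in any
# subinterval are invisible to the observer, as in a real point count.
histories = [detection_history() for _ in range(60)]
observed = [h for h in histories if any(h)]
print(f"true N = 60, birds detected at least once = {len(observed)}")
# These capture-recapture-style histories would then be fed to a
# closed-population abundance estimator.
```

Lowering `p_start_singing` or mixing birds with different rates reproduces, qualitatively, the underestimation the field test found for low and heterogeneous singing rates.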

Relevance: 30.00%

Abstract:

The extent to which species are plastic in the timing of their reproductive events relative to phenology suggests how climate change might affect their demography. An ecological mismatch between the timing of hatch for avian species and the peak availability in quality and quantity of forage for rapidly growing offspring might ultimately affect recruitment to the breeding population, unless individuals can adjust the timing of breeding to adapt to changing phenology. We evaluated the effects of goose density, hatch timing relative to forage plant phenology, and weather indices on the annual growth of pre-fledging Canada geese (Branta canadensis) from 1993 to 2010 at Akimiski Island, Nunavut. We found effects of both density and hatch timing relative to forage plant phenology: the earlier that eggs hatched relative to forage plant phenology, the larger the mean gosling size near fledging. Goslings were smallest in years when hatch was latest relative to forage plant phenology and when the local abundance of breeding adults was highest. We found no evidence for a trend in relative hatch timing, but it was apparent that in early springs Canada geese tended to hatch later relative to vegetation phenology, suggesting that geese were not always able to adjust the timing of nesting as rapidly as vegetation phenology advanced. Analyses using forage biomass information revealed a positive relationship between gosling size and per capita biomass availability, suggesting a causal mechanism for the density effect. Weather parameters explained additional variation in mean annual gosling size, although total June and July rainfall had only a small additive effect on gosling size. Modelling annual first-year survival probability with mean annual gosling size as an annual covariate revealed a positive relationship, suggesting that reduced gosling growth negatively impacts recruitment.

Relevance: 30.00%

Abstract:

The main biogeochemical nutrient distributions, along with ambient ocean temperature and the light field, control ocean biological productivity. Observations of nutrients are much sparser than physical observations of temperature and salinity, yet it is critical to validate biogeochemical models against these sparse observations if we are to successfully model biological variability and trends. Here we use data from the Bermuda Atlantic Time-series Study and the World Ocean Database 2005 to demonstrate quantitatively that, over the entire globe, a significant fraction of the temporal variability of phosphate, silicate and nitrate within the oceans is correlated with water density. The temporal variability of these nutrients as a function of depth is almost always greater than as a function of potential density, with the largest reductions in variability found within the main pycnocline. The greater nutrient variability as a function of depth occurs when dynamical processes vertically displace nutrient and density fields together on shorter timescales than biological adjustments. These results show that dynamical processes can have a significant impact on instantaneous nutrient distributions. These processes must therefore be considered when modeling biogeochemical systems, when comparing such models with observations, or when assimilating data into such models.
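A minimal Python sketch of the comparison being described (temporal variance of a nutrient computed at fixed depth versus within fixed density classes); the synthetic heaving of isopycnals and the nutrient-density relationship are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
depth = np.linspace(0, 1000, 101)            # depth grid, m
n_t = 200                                    # number of snapshots in time

# Density profile heaves vertically over time (internal waves, eddies).
heave = rng.normal(0.0, 30.0, n_t)           # vertical displacement, m
sigma = 26.0 + 2.0 / (1.0 + np.exp(-(depth[None, :] - 500 - heave[:, None]) / 80.0))

# Nutrient constructed as a fixed function of density (the limiting case).
nutrient = 2.0 * (sigma - 26.0)              # e.g. µmol/kg

# Temporal variance at fixed depth vs. within fixed density classes.
var_z = nutrient.var(axis=0).mean()
bins = np.linspace(26.0, 28.0, 21)
var_rho = []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (sigma >= lo) & (sigma < hi)
    if mask.sum() > 1:
        var_rho.append(nutrient[mask].var())
print(f"variance at fixed depth {var_z:.3f} vs fixed density {np.mean(var_rho):.3f}")
```

Because the nutrient here moves rigidly with the density field, the fixed-depth variance is large while the fixed-density variance is nearly zero, which is the signature of dynamical heaving that the paper detects in the observations.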

Relevance: 30.00%

Abstract:

LDL oxidation may be important in atherosclerosis. Extensive oxidation of LDL by copper induces increased uptake by macrophages, but results in decomposition of hydroperoxides, making it more difficult to investigate the effects of hydroperoxides in oxidised LDL on cell function. We describe here a simple method of oxidising LDL by dialysis against copper ions at 4°C, which inhibits the decomposition of hydroperoxides and allows the production of LDL rich in hydroperoxides (626 ± 98 nmol/mg LDL protein) but low in oxysterols (3 ± 1 nmol 7-ketocholesterol/mg LDL protein), whilst allowing sufficient modification (2.6 ± 0.5 relative electrophoretic mobility) for rapid uptake by macrophages (5.49 ± 0.75 µg I-125-labelled hydroperoxide-rich LDL vs. 0.46 ± 0.04 µg protein/mg cell protein in 18 h for native LDL). By dialysing under the same conditions but at 37°C, the hydroperoxides are decomposed extensively and the LDL becomes rich in oxysterols. This novel method of oxidising LDL in high yield to either a hydroperoxide- or oxysterol-rich form, by simply altering the temperature of dialysis, may provide a useful tool for determining the effects of these different oxidation products on cell function.

Relevance: 30.00%

Abstract:

Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward constrained regression manner. The leave-one-out (LOO) test score is used for kernel selection, and the jackknife parameter estimator, subject to a positivity constraint check, is used for estimating the single parameter introduced at each forward step. As such, the proposed approach is simple to implement and the associated computational cost is very low. An illustrative example is employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate.
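A simplified Python sketch of the overall construction, assuming a fixed common kernel width and substituting ordinary nonnegative least squares for the paper's jackknife/LOO machinery; the greedy correlation criterion is likewise a stand-in for the LOO test score, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def gauss_kernel(x, c, h):
    """Normalised 1-D Gaussian kernel of width h centred at c."""
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
x = np.sort(rng.normal(0.0, 1.0, 200))        # training sample
h = 0.3                                       # common kernel width (assumed)

# Classical Parzen window estimate at the training points: the target.
K = gauss_kernel(x[:, None], x[None, :], h)   # K[i, j] = k(x_i; centre x_j)
target = K.mean(axis=1)

# Greedy forward selection of kernel centres; weights refitted by
# nonnegative least squares at each step (positivity constraint).
selected, residual = [], target.copy()
for _ in range(10):                           # at most 10 kernels
    scores = np.abs(K.T @ residual)           # correlation with residual
    scores[selected] = -np.inf                # do not reselect a centre
    selected.append(int(np.argmax(scores)))
    w, _ = nnls(K[:, selected], target)       # nonnegative weights
    residual = target - K[:, selected] @ w

w /= w.sum()                                  # density integrates to one
print(f"{len(selected)} kernels selected, max weight {w.max():.3f}")
```

The resulting estimate uses 10 kernels rather than all 200, which is the sparsity-versus-PW-accuracy trade-off the abstract describes.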

Relevance: 30.00%

Abstract:

Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of the jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate held fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
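As an illustration of the Gauss-Newton width update alone, a minimal Python sketch for a single selected kernel with its weight held fixed; the setup and values are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def kernel(x, c, s):
    """1-D Gaussian kernel of width s centred at c."""
    return np.exp(-0.5 * ((x - c) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def kernel_ds(x, c, s):
    """Derivative of the kernel with respect to its width s."""
    return kernel(x, c, s) * (((x - c) ** 2) / s ** 3 - 1.0 / s)

def update_width(x, t, c, w, s, n_steps=20):
    """Gauss-Newton refinement of one kernel's width, weight held fixed.

    x: sample points, t: target (e.g. Parzen window) values at x,
    c: centre of the selected kernel, w: its fixed weight, s: initial width.
    """
    for _ in range(n_steps):
        r = t - w * kernel(x, c, s)       # residual of the one-kernel model
        J = w * kernel_ds(x, c, s)        # Jacobian wrt the scalar width
        s = max(s + (J @ r) / (J @ J), 1e-3)  # scalar Gauss-Newton step
    return s

x = np.linspace(-3, 3, 100)
t = kernel(x, 0.0, 0.5)                   # target generated with width 0.5
print(update_width(x, t, c=0.0, w=1.0, s=0.3))  # refined width, near 0.5
```

Because the width enters the model nonlinearly, a linear least-squares step cannot update it; the Gauss-Newton step is the standard workaround for a single nonlinear parameter.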

Relevance: 30.00%

Abstract:

A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition that the design matrix is positive definite, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming (MNQP) algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small, significant subset of the design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
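A minimal Python sketch of the MNQP update named above, applied to a toy density-fitting instance; the zero-norm approximation term is omitted for brevity, so this shows the solver rather than the full estimator.

```python
import numpy as np

def mnqp(B, p, n_iter=500, eps=1e-12):
    """Multiplicative nonnegative quadratic programming (sketch).

    Minimises 0.5 * w'Bw - p'w subject to w >= 0, assuming B and p have
    nonnegative entries (true for Gaussian kernel Gram matrices).
    Fixed-point update: w_i <- w_i * p_i / (Bw)_i, which preserves
    nonnegativity and stalls only where the KKT conditions hold.
    """
    w = np.full(len(p), 1.0 / len(p))     # feasible positive start
    for _ in range(n_iter):
        w = w * p / (B @ w + eps)
    return w

# Toy instance: fit candidate kernels of width 0.4 to a Parzen window
# target of width 0.6 evaluated at the sample points.
rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 30)

def gram(h):
    return np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))

K = gram(0.4)                  # candidate kernels, one per sample point
t = gram(0.6).mean(axis=1)     # Parzen window target at the sample points
B, p = K.T @ K, K.T @ t        # normal-equation form of the LS problem

w = mnqp(B, p)
w /= w.sum()                   # normalise the mixture weights
print(f"weights in [{w.min():.2e}, {w.max():.2e}]")
```

In the paper the zero-norm approximation is folded into this quadratic objective, which is what drives most weights exactly to zero; without it the update above typically yields small but nonzero weights.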

Relevance: 30.00%

Abstract:

This paper analyzes the delay performance of the Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under ideal conditions and in the presence of transmission errors. Relays are nodes capable of supporting high data rates for other, low data rate nodes. In an ideal channel, ErDCF achieves higher throughput and reduced energy consumption compared to the IEEE 802.11 Distributed Coordination Function (DCF), and this gain is maintained in the presence of errors. Relays are also expected to reduce delay; however, the delay behavior of ErDCF under transmission errors was not previously known. In this work, we present the impact of transmission errors on delay. It turns out that under transmission errors of sufficient magnitude to increase the number of dropped packets, packet delay is reduced. This is due to the increase in the probability of failure: as a result, the packet drop time increases, reflecting the throughput degradation.

Relevance: 30.00%

Abstract:

Background: The FFAR1 receptor is a long-chain fatty acid G-protein coupled receptor which is expressed widely but found at high density in the pancreas and central nervous system. It has been suggested that FFAR1 may play a role in insulin sensitivity and lipotoxicity, and that it is associated with type 2 diabetes. Here we investigate the effect of three common SNPs of FFAR1 (rs2301151; rs16970264; rs1573611) on pancreatic function, BMI, body composition and plasma lipids. Methodology/Principal Findings: For this enquiry we used the baseline RISCK data, which provide a cohort of overweight subjects at increased cardiometabolic risk with detailed phenotyping. The key findings were that SNPs of the FFAR1 gene region were associated with differences in body composition and lipids, and that the effects of the three SNPs combined were cumulative on BMI, body composition and total cholesterol. The effects on BMI and body fat were predominantly mediated by rs1573611 (1.06 kg/m² higher BMI (P = 0.009) and 1.53% higher body fat (P = 0.002) per C allele). Differences in plasma lipids were also associated with the BMI-increasing allele of rs2301151, including higher total cholesterol (0.2 mmol/L per G allele, P = 0.01), and with the variant A allele of rs16970264, which was associated with lower total (0.3 mmol/L, P = 0.02) and LDL (0.2 mmol/L, P < 0.05) cholesterol, but also with lower HDL-cholesterol (0.09 mmol/L, P < 0.05), although the difference was not apparent when controlling for multiple testing. There were no statistically significant effects of the three SNPs on insulin sensitivity or beta cell function, although the accumulated risk alleles were associated with lower beta cell function with increasing plasma fatty acids of carbon chain length greater than six. Conclusions/Significance: Differences in body composition and lipids associated with common SNPs in the FFAR1 gene were apparently not mediated by changes in insulin sensitivity or beta-cell function.

Relevance: 30.00%

Abstract:

Low density lipoprotein (LDL) has recently been shown to be oxidised by iron within the lysosomes of macrophages, and this is a novel potential mechanism for LDL oxidation in atherosclerosis. Our aim was to characterise the chemical and physical changes induced in LDL by iron at lysosomal pH and to investigate the effects of iron chelators and α-tocopherol on this process. LDL was oxidised by iron at pH 4.5 and 37°C and its oxidation monitored by spectrophotometry and HPLC. LDL was oxidised effectively by FeSO4 (5-50 µM) and became highly aggregated at pH 4.5, but not at pH 7.4. Cholesteryl esters decreased and, after a pronounced lag, 7-ketocholesterol increased greatly. Total hydroperoxides (measured by the tri-iodide assay) increased up to 24 h and then decreased only slowly. The lipid composition after 12 h at pH 4.5 and 37°C was similar to that of LDL oxidised by copper at pH 7.4 and 4°C, i.e. rich in hydroperoxides but low in oxysterols. Previously oxidised LDL aggregated rapidly and spontaneously at pH 4.5, but not at pH 7.4. Ferrous iron was much more effective than ferric iron at oxidising LDL when added after the oxidation was already underway. The iron chelators diethylenetriaminepentaacetic acid and, to a lesser extent, desferrioxamine inhibited LDL oxidation when added during its initial stages, but were unable to prevent LDL aggregating after it had been partially oxidised. Surprisingly, desferrioxamine increased the rate of LDL modification when added late in the oxidation process. α-Tocopherol enrichment of LDL initially increased the oxidation of LDL, but inhibited it later. The presence of oxidised and highly aggregated lipid within lysosomes has the potential to perturb the function of these organelles and to promote atherosclerosis.