978 results for Log-normal degree distribution
Abstract:
To determine self-consistently the time evolution of particle size and number density, in situ multi-angle polarization-sensitive laser light scattering was used. Cross-polarization intensities (incident and scattered light intensities with opposite polarization) measured at 135 degrees, together with ex situ transmission electron microscopy analysis, demonstrate the existence of nonspherical agglomerates during the early phase of agglomeration. Later in the particle development, both techniques again reveal spherical particles. The presence of strong cross-polarization intensities is accompanied by low-frequency instabilities detected in the scattered light intensities and plasma emission. It is found that the particle radius and particle number density during the agglomeration phase can be well described by the Brownian free-molecule coagulation model. Application of this neutral-particle coagulation model is justified by calculation of the particle charge, whereby it is shown that particles of a few tens of nanometers can be considered neutral under our experimental conditions. The measured particle dispersion can be well described by a Brownian free-molecule coagulation model including a log-normal particle size distribution. (C) 1996 American Institute of Physics.
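For orientation, a minimal numerical sketch of this type of model, reduced to the monodisperse case (the full treatment in the paper tracks a log-normal size distribution; all material parameters and initial conditions here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Monodisperse Brownian free-molecule coagulation, forward-Euler integration.
# All numbers are illustrative assumptions, not values from the paper.
kB = 1.380649e-23                 # Boltzmann constant [J/K]
T = 400.0                         # gas temperature [K] (assumed)
rho = 2.3e3                       # particle material density [kg/m^3] (assumed)
N = 1.0e17                        # initial number density [1/m^3] (assumed)
r0 = 2.0e-9                       # initial particle radius [m] (assumed)
phi = N * (4.0 / 3.0) * np.pi * r0**3   # total particle volume fraction (conserved)

def kernel(v):
    """Free-molecule collision kernel for two equal-volume particles."""
    pref = (3.0 / (4.0 * np.pi))**(1.0 / 6.0) * np.sqrt(6.0 * kB * T / rho)
    return pref * np.sqrt(2.0 / v) * (2.0 * v**(1.0 / 3.0))**2

t, dt = 0.0, 1.0e-6
for _ in range(200_000):
    v = phi / N                         # current mean particle volume
    N -= 0.5 * kernel(v) * N**2 * dt    # dN/dt = -(1/2) K N^2
    t += dt

r = (3.0 * phi / (4.0 * np.pi * N))**(1.0 / 3.0)
print(f"after t = {t:.2g} s: N = {N:.3g} m^-3, r = {r * 1e9:.1f} nm")
```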
Abstract:
One feature of the modern nutrition transition is the growing consumption of animal proteins. The most common approach in the quantitative analysis of this change used to be the study of averages of food consumption. But this kind of analysis seems incomplete without knowledge of the number of consumers. Data about consumers are not usually published in historical statistics. This article introduces a methodological approach for reconstructing consumer populations. The methodology is based on some assumptions about the diffusion process of foodstuffs and on modeling consumption patterns with a log-normal distribution. The estimation process is illustrated with the specific case of milk consumption in Spain between 1925 and 1981. The results fit quite well with other available data and indirect sources, showing that this dietary change was a slow and late process. The reconstruction of consumer populations could shed new light on the study of nutritional transitions.
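A hedged sketch of the kind of reconstruction described, under strong simplifying assumptions (the log-normal spread, the "consumer" threshold, and the numbers below are invented for illustration and are not the article's calibration):

```python
import numpy as np
from scipy.stats import norm

def consumer_share(per_capita_mean, sigma, threshold):
    """Share of the population above a consumption threshold, assuming
    individual consumption is log-normal with a known sigma."""
    mu = np.log(per_capita_mean) - 0.5 * sigma**2   # matches E[X] = exp(mu + sigma^2/2)
    return 1.0 - norm.cdf((np.log(threshold) - mu) / sigma)

# Illustrative numbers only (not data from the article):
# mean availability 100 g/day, sigma = 1.2, a 'consumer' drinks > 30 g/day.
print(f"estimated consumer share: {consumer_share(100.0, 1.2, 30.0):.2f}")
```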
Abstract:
The exposure to dust and polynuclear aromatic hydrocarbons (PAH) of 15 truck drivers from Geneva, Switzerland, was measured. The drivers were divided into "long-distance" and "local" drivers and into smokers and nonsmokers, and were compared with a control group of 6 office workers, also divided into smokers and nonsmokers. Dust was measured on one workday both by a direct-reading instrument and by sampling. The local drivers showed higher exposure to dust (0.3 mg/m3) and PAH than the long-distance drivers (0.1 mg/m3), who showed no difference from the control group. This observation may be due to the fact that the local drivers spend more time in more polluted areas, such as streets with heavy traffic and construction sites, than the long-distance drivers do. Smoking does not influence the exposure of professional truck drivers to dust and PAH, as measured in this study, probably because the ventilation rate of the truck cabins is relatively high even on cold days (11-15 r/h). The distribution of dust concentrations was shown in some cases to be quite different from the expected log-normal distribution. The contribution of diesel exhaust to these exposures could not be estimated since no specific tracer was used. However, the relatively low level of dust exposure does not support the hypothesis that present-day levels of diesel exhaust particulates play a significant role in the excess occurrence of lung cancer observed in professional truck drivers.
Abstract:
We study the damage-enhanced creep rupture of disordered materials by means of a fiber bundle model. Broken fibers undergo a slow stress relaxation modeled by a Maxwell element whose stress exponent m can vary over a broad range. Under global load sharing we show that, due to the strength disorder of the fibers, the lifetime t_f of the bundle has sample-to-sample fluctuations characterized by a log-normal distribution independent of the type of disorder. We determine the Monkman-Grant relation of the model and establish a relation between the rupture life t_f and the characteristic time t_m of the intermediate creep regime of the bundle, where the minimum strain rate is reached, making possible reliable estimates of t_f from short-term measurements. Approaching macroscopic failure, the deformation rate has a finite-time power law singularity whose exponent is a decreasing function of m. On the microlevel, the distribution of waiting times is found to have power law behavior, with m-dependent exponents that differ below and above the critical load of the bundle. Approaching the critical load from above, the cutoff value of the distributions has a power law divergence whose exponent coincides with the stress exponent of the Maxwell elements.
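For intuition, a toy bundle in the same spirit, simplified to instantaneous fiber removal under global load sharing (the paper's Maxwell-element stress relaxation is omitted; the thresholds, load, exponent, and sample counts below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def lifetime(n_fibers=1000, sigma0=0.2, gamma=2.0):
    """Toy creep bundle: every intact fiber accumulates damage at rate
    sigma^gamma, with sigma = sigma0 * n_fibers / n_intact under global
    load sharing; a fiber fails when the common damage level reaches its
    random threshold."""
    x = np.sort(rng.random(n_fibers))            # damage thresholds
    dx = np.diff(x, prepend=0.0)                 # damage increments between breaks
    n_intact = n_fibers - np.arange(n_fibers)    # intact count before each break
    sigma = sigma0 * n_fibers / n_intact         # stress per intact fiber
    return float(np.sum(dx / sigma**gamma))      # total time to failure

log_tf = np.log([lifetime() for _ in range(2000)])
print(f"mean(log t_f) = {log_tf.mean():.3f}, std(log t_f) = {log_tf.std():.3f}")
# A histogram of log t_f is close to Gaussian, i.e. the sample-to-sample
# lifetime fluctuations are roughly log-normal, as in the abstract.
```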
Abstract:
The study of organism movement is essential for understanding how ecosystems function. In the case of exploited marine ecosystems, this leads to an interest in the spatial strategies of fishermen. One of the most widely used approaches for modelling the movement of top predators is the Lévy random walk. A random walk is a mathematical model composed of random displacements. In the Lévy case, the displacement lengths follow a Lévy stable law. Also in this case, the lengths, as they tend to infinity (in practice, when they are large, large relative to the median or the third quartile, for example), follow a power law characteristic of the type of Lévy random walk (Cauchy, Brownian, or strictly Lévy). In practice, besides the fact that this property is used in the converse direction without theoretical foundation, distribution tails, a notion that is moreover imprecise, are modelled by power laws without any discussion of the sensitivity of the results to the definition of the distribution tail, or of the relevance of the goodness-of-fit tests and model selection criteria. In this work, which deals with the observed movements of three Peruvian anchovy fishing vessels, several models of distribution tails (log-normal, exponential, truncated exponential, power law, and truncated power law) were compared, as were two possible definitions of the distribution tail (from the median to infinity, or from the third quartile to infinity). In terms of the statistical criteria and tests used, the truncated laws (exponential and power) turned out to be the best. They also incorporate the fact that, in practice, vessels do not exceed a certain displacement-length limit. The model choice proved sensitive to the choice of the beginning of the distribution tail: for the same vessel, the choice of one truncated model or the other depends on the interval of variable values over which the model is fitted. Finally, we discuss the ecological implications of the results of this work.
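A hedged sketch of this kind of tail-model comparison, reduced to two of the five candidate models (the synthetic data, the third-quartile cutoff, and the use of AIC as the selection criterion are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
steps = rng.pareto(1.5, 5000) + 1.0           # synthetic displacement lengths

xmin = np.quantile(steps, 0.75)               # tail defined from the third quartile
tail = steps[steps > xmin]
n = tail.size

# Pareto (pure power law) MLE on the tail: p(x) ~ x^(-alpha-1) for x > xmin.
alpha = n / np.sum(np.log(tail / xmin))
ll_pareto = n * np.log(alpha / xmin) - (alpha + 1) * np.sum(np.log(tail / xmin))

# Exponential MLE on the tail: p(x) ~ exp(-lam * (x - xmin)) for x > xmin.
lam = 1.0 / np.mean(tail - xmin)
ll_expon = n * np.log(lam) - lam * np.sum(tail - xmin)

# Akaike information criterion, one free parameter each; lower is better.
for name, ll in [("power law", ll_pareto), ("exponential", ll_expon)]:
    print(f"{name:12s} AIC = {2 * 1 - 2 * ll:.1f}")
```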
Abstract:
Random scale-free networks have the peculiar property of being prone to the spreading of infections. Here we provide, for the susceptible-infected-susceptible model, an exact result showing that a scale-free degree distribution with diverging second moment is a sufficient condition for a null epidemic threshold in unstructured networks with either assortative or disassortative mixing. Degree correlations are therefore irrelevant to the epidemic spreading picture in these scale-free networks. The present result is related to the divergence of the average nearest-neighbor degree, enforced by the degree detailed balance condition.
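For context, the uncorrelated-network threshold that this result generalizes, and the correlated-network criterion, can be written compactly (these are the textbook heterogeneous mean-field expressions, not formulas quoted from the abstract):

```latex
% Heterogeneous mean-field SIS thresholds:
\lambda_c^{\mathrm{uncorr}} = \frac{\langle k \rangle}{\langle k^{2} \rangle}
\qquad\text{(uncorrelated networks)},
\qquad
\lambda_c^{\mathrm{corr}} = \frac{1}{\Lambda_{\max}},
\quad
C_{k k'} = k \, P(k' \mid k)
\qquad\text{(correlated networks)}.
```

Here Λ_max is the largest eigenvalue of the connectivity matrix C_{kk'}; the abstract's statement amounts to showing that a diverging second moment ⟨k²⟩ forces Λ_max to diverge, through the average nearest-neighbor degree, so λ_c vanishes for any mixing pattern.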
Abstract:
We present a generator of random networks in which both the degree-dependent clustering coefficient and the degree distribution are tunable. Following the same philosophy as the configuration model, the degree distribution and the clustering coefficient for each class of nodes of degree k are fixed ad hoc and a priori. The algorithm generates the corresponding topologies by first applying a closure of triangles and then the classical closure of the remaining free stubs. The procedure unveils a universal relation between clustering and degree-degree correlations for all networks, in which the level of assortativity establishes an upper limit to the level of clustering. Maximum assortativity places no restriction on the decay of the clustering coefficient, whereas disassortativity sets a stronger constraint on its behavior. Correlation measures in real networks are seen to respect this structural bound.
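A crude sketch of such a two-phase construction (this is a simplification, not the paper's algorithm: a global triangle budget stands in for a degree-dependent clustering target c(k), and the degree sequence is only approximately preserved):

```python
import random
import networkx as nx

def clustered_configuration_graph(degrees, triangle_fraction=0.5, seed=0):
    """Two-phase sketch: spend a fraction of the stubs on closing triangles,
    then pair the remaining stubs at random as in the classical configuration
    model. Note that each triangle corner gains two edges, so degrees are
    only approximately preserved in this simplified version."""
    rng = random.Random(seed)
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)

    G = nx.Graph()
    G.add_nodes_from(range(len(degrees)))

    # Phase 1: closure of triangles from random stub triples.
    n_tri = int(triangle_fraction * len(stubs)) // 3 * 3
    for i in range(0, n_tri, 3):
        a, b, c = stubs[i], stubs[i + 1], stubs[i + 2]
        if len({a, b, c}) == 3:                 # skip degenerate triples
            G.add_edges_from([(a, b), (b, c), (a, c)])

    # Phase 2: classical closure of the remaining free stubs.
    rest = stubs[n_tri:]
    for i in range(0, len(rest) - 1, 2):
        if rest[i] != rest[i + 1]:              # discard self-loops
            G.add_edge(rest[i], rest[i + 1])
    return G

G = clustered_configuration_graph([5] * 200)
print(f"average clustering: {nx.average_clustering(G):.3f}")
```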
Abstract:
The most suitable method for the estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, which assume that the data come from a particular family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes, or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, emerges as the most reliable and generalizable method of size diversity evaluation.
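A minimal sketch of the recommended procedure (Gaussian kernel after geometric-mean standardization; the grid, bandwidth defaults, and test data are assumptions for illustration):

```python
import numpy as np
from scipy.stats import gaussian_kde

def size_diversity(sizes, grid_points=2048):
    """Shannon diversity for a continuous size variable:
    H = -integral p(x) ln p(x) dx, with p estimated by a Gaussian kernel
    after dividing the sizes by the sample geometric mean."""
    x = np.asarray(sizes, dtype=float)
    x = x / np.exp(np.mean(np.log(x)))            # standardize by geometric mean
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), grid_points)
    p = np.clip(kde(grid), 1e-300, None)          # avoid log(0)
    dx = grid[1] - grid[0]
    return float(-np.sum(p * np.log(p)) * dx)     # trapezoid-free Riemann sum

# The standardization makes the result unit-free: rescaling all sizes
# (e.g. changing measurement units) leaves the diversity unchanged.
rng = np.random.default_rng(2)
sizes = rng.lognormal(mean=2.0, sigma=0.8, size=1000)
print(f"H = {size_diversity(sizes):.3f}")
print(f"H (same sizes, other units) = {size_diversity(1000 * sizes):.3f}")
```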
Abstract:
This paper sets out to identify the initial positions of the different decision makers who intervene in a group decision making process with a reduced number of actors, and to establish possible consensus paths between these actors. As a methodological support, it employs one of the most widely-known multicriteria decision techniques, namely, the Analytic Hierarchy Process (AHP). Assuming that the judgements elicited by the decision makers follow the so-called multiplicative model (Crawford and Williams, 1985; Altuzarra et al., 1997; Laininen and Hämäläinen, 2003) with log-normal errors and unknown variance, a Bayesian approach is used in the estimation of the relative priorities of the alternatives being compared. These priorities, estimated by way of the median of the posterior distribution and normalised in a distributive manner (priorities add up to one), are a clear example of compositional data that will be used in the search for consensus between the actors involved in the resolution of the problem through the use of Multidimensional Scaling tools.
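A minimal sketch of the point-estimation step under the multiplicative model (the pairwise comparison matrix is invented for illustration; the paper's full Bayesian treatment replaces this closed-form estimate with the median of a simulated posterior):

```python
import numpy as np

# Invented 3x3 reciprocal pairwise comparison matrix (illustration only).
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Multiplicative model: log a_ij = log w_i - log w_j + eps_ij.
# With normal errors on the logs and a flat prior, the posterior of log w
# is centered on the row means of log A, i.e. the row geometric means
# (Crawford and Williams, 1985); its median coincides with this estimate.
w = np.exp(np.log(A).mean(axis=1))
priorities = w / w.sum()      # distributive normalisation: priorities sum to one
print(priorities)
```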
Abstract:
It is a well-known phenomenon that the constant amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit defined as the maximum stress at the notch is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of experimental formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks shall be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated. It is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared with the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient shall be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give satisfactory results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches, and the use of linear elastic fracture mechanics therefore becomes questionable.
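A hedged sketch of the statistical size effect argument (the parent distribution, its parameters, the threshold stress intensity range, and the geometry factor are all illustrative assumptions, not values fitted in the thesis): if the parent crack-size cdf F is known, the largest of n initiated cracks has cdf F(a)^n, and the fatigue limit follows from the threshold condition Δσ = ΔK_th / (Y √(π a)).

```python
import numpy as np
from scipy.stats import norm

# Illustrative assumptions (not fitted values from the thesis):
MED, SIGMA = 20e-6, 0.5    # log-normal parent crack depth: median 20 um
DK_TH = 5.0e6              # threshold stress intensity range [Pa*sqrt(m)]
Y = 0.73                   # geometry factor (assumed semicircular surface crack)

def expected_max_crack(n, q=0.5):
    """Crack depth a with P(max of n cracks <= a) = q, i.e. the q-quantile
    of F^n when the parent crack-size distribution F is log-normal."""
    p = q ** (1.0 / n)                     # per-crack quantile: F(a) = q^(1/n)
    return MED * np.exp(SIGMA * norm.ppf(p))

def fatigue_limit(a):
    """Stress range at the LEFM threshold for a crack of depth a."""
    return DK_TH / (Y * np.sqrt(np.pi * a))

# More stressed area -> larger sample of initiated cracks -> larger extreme
# crack -> lower fatigue limit: the statistical size effect.
for n in (10, 100, 1000):
    a = expected_max_crack(n)
    print(f"n = {n:4d}: a_max ~ {a * 1e6:5.1f} um, "
          f"fatigue limit ~ {fatigue_limit(a) / 1e6:5.0f} MPa")
```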
Abstract:
Microscopic visualization, especially in transparent micromodels, can provide valuable information for understanding pore-scale transport phenomena in the different processes occurring in porous materials (food, timber, soils, etc.). Micromodel studies focus mainly on the observation of multi-phase flow, which is closer to reality. The aim of this work was to study the flexography process and its application to the manufacture of polyester resin transparent micromodels, with an application to carrots. The materials used to implement a flexography station for micromodel construction were a thermoregulated water bath, a UV-light exposure chamber, a photosensitive substance (photopolymer), RTV silicone, polyester resin, and glass plates. In this work, size-distribution data were obtained for the particular kind of carrot used, and a transparent micromodel was built with a square cross-section and a log-normal pore size distribution, with pore radii ranging from 10 to 110 µm (average of 22 µm) and a micromodel size of 10 × 10 cm. Finally, it is stressed that the protocol for producing 2D polyester resin transparent micromodels was successfully implemented.
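As a small, hedged illustration of specifying such a pore-size population (the log-normal parameterization below is chosen only to roughly match the quoted range and average; it is not the authors' design):

```python
import numpy as np

rng = np.random.default_rng(3)

def pore_radii(n, mean_um=22.0, sigma=0.4, lo=10.0, hi=110.0):
    """Sample pore radii [um] from a log-normal whose untruncated mean is
    mean_um, rejecting values outside [lo, hi]."""
    mu = np.log(mean_um) - 0.5 * sigma**2   # E[X] = exp(mu + sigma^2/2)
    radii = np.empty(0)
    while radii.size < n:
        draw = rng.lognormal(mu, sigma, size=2 * n)
        radii = np.concatenate([radii, draw[(draw >= lo) & (draw <= hi)]])
    return radii[:n]

r = pore_radii(10_000)
print(f"mean radius: {r.mean():.1f} um, range: [{r.min():.1f}, {r.max():.1f}] um")
```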
Abstract:
The aim of this master's thesis is to describe the properties of the double Pareto-lognormal distribution, to show how explanatory variables can be introduced into the model, and to present its broad potential for applications in actuarial science and finance. First, we give the definition of the double Pareto-lognormal distribution and present some of its properties, based on the work of Reed and Jorgensen (2004). The parameters can be estimated using the method of moments or maximum likelihood. Next, we add an explanatory variable to our model. The parameter estimation procedure for this model is also discussed. Third, numerical applications of our model are illustrated and some useful statistical tests are performed.
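A brief sketch of the generative representation given by Reed and Jorgensen (2004), which makes the distribution straightforward to simulate (the parameter values here are illustrative): X ~ dPlN(α, β, μ, σ²) has the same law as exp(N + E₁/α − E₂/β), where N is Normal(μ, σ²) and E₁, E₂ are independent standard exponentials.

```python
import numpy as np

rng = np.random.default_rng(4)

def rdpln(n, alpha=2.5, beta=1.5, mu=0.0, sigma=1.0):
    """Sample from the double Pareto-lognormal distribution via its
    representation X = exp(N + E1/alpha - E2/beta), with N normal and
    E1, E2 standard exponentials (Reed and Jorgensen, 2004)."""
    N = rng.normal(mu, sigma, n)
    E1 = rng.exponential(1.0, n)
    E2 = rng.exponential(1.0, n)
    return np.exp(N + E1 / alpha - E2 / beta)

x = rdpln(100_000)
# Both tails are power laws: density ~ x^(-alpha-1) above, ~ x^(beta-1) below.
print(f"sample mean: {x.mean():.2f}, 99th percentile: {np.quantile(x, 0.99):.2f}")
```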