978 results for "radial distribution functions"
Abstract:
Background. We elaborated a model that predicts the centiles of the 25(OH)D distribution, taking seasonal variation into account. Methods. Data from two Swiss population-based studies were used to generate (CoLaus) and validate (Bus Santé) the model. Serum 25(OH)D was measured by ultra-high-pressure LC-MS/MS and immunoassay. Linear regression models on square-root-transformed 25(OH)D values were used to predict centiles of the 25(OH)D distribution. Distribution functions of the observations from the replication set, predicted with the model, were inspected to assess replication. Results. Overall, 4,912 and 2,537 Caucasians were included in the original and replication sets, respectively. Mean (SD) 25(OH)D, age, BMI, and percentage of men were 47.5 (22.1) nmol/L, 49.8 (8.5) years, 25.6 (4.1) kg/m², and 49.3% in the original study. The best model included sex, BMI, and sin-cos functions of the measurement day. Sex- and BMI-specific 25(OH)D centile curves as a function of measurement date were generated. The model estimates any centile of the 25(OH)D distribution for given values of sex, BMI, and date, as well as the quantile corresponding to a 25(OH)D measurement. Conclusions. We generated and validated centile curves of 25(OH)D in the general adult Caucasian population. These curves can help rank vitamin D status by centile independently of when 25(OH)D is measured.
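The centile machinery this abstract describes can be sketched in a few lines. The coefficients and residual SD below are purely illustrative placeholders (the paper's fitted values are not given here); only the structure, a linear model on square-root-transformed 25(OH)D with a sin-cos pair for measurement date, back-transformed by squaring, follows the description above.

```python
import math
from statistics import NormalDist

# Hypothetical coefficients on the square-root scale (illustrative only).
BETA = {"intercept": 6.0, "male": 0.2, "bmi": -0.05, "sin": 0.6, "cos": 0.3}
SIGMA = 1.1  # residual SD on the sqrt scale (illustrative only)

def _mean_sqrt(day_of_year, bmi, male):
    """Linear predictor of sqrt(25(OH)D) with an annual sin-cos cycle."""
    angle = 2 * math.pi * day_of_year / 365.25
    return (BETA["intercept"] + BETA["male"] * male + BETA["bmi"] * bmi
            + BETA["sin"] * math.sin(angle) + BETA["cos"] * math.cos(angle))

def predict_25ohd_centile(day_of_year, bmi, male, centile):
    """Centile of 25(OH)D (nmol/L): normal centile on the sqrt scale,
    back-transformed by squaring."""
    z = NormalDist().inv_cdf(centile / 100)
    return (_mean_sqrt(day_of_year, bmi, male) + z * SIGMA) ** 2

def quantile_of_measurement(value, day_of_year, bmi, male):
    """Percentile corresponding to an observed 25(OH)D value."""
    mu = _mean_sqrt(day_of_year, bmi, male)
    return 100 * NormalDist(mu, SIGMA).cdf(math.sqrt(value))
```

The two functions are inverses of one another, which is the property the model exploits to rank a measurement taken on any date.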
Abstract:
This bachelor's thesis discusses reliability analysis of electricity distribution, reliability calculation, and reliability indices. The thesis presents the basic principles of reliability analysis and focuses on examining the reliability of electricity distribution with analytical methods and simple examples. The most common reliability indices in use are also presented. Reliability analysis is of growing importance for the network business, its regulation, and the planning and development of networks in today's society, which demands ever more reliable and uninterrupted electricity distribution. In Finland, owing to the radial operating principle of the distribution network, the equations needed for reliability calculation simplify to a few basic equations. Simple calculations therefore suffice to account for the cost of interruptions in network optimisation and to use reliability indices as a quality incentive in the regulation model for network companies. The wide range of indices that describe reliability in different ways, produced by reliability analysis and the calculations performed as part of it, and their varied application in different countries complicate comparisons made on the basis of these indices. In Finland, the regulatory authority publishes the indices to be compiled and examined, which thus serve the authority's regulatory needs at the national level and support network development.
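The customer-based indices the thesis refers to can be illustrated concretely. A minimal sketch of SAIFI, SAIDI, and CAIDI as defined in IEEE Std 1366 (the function name and outage data are hypothetical):

```python
def reliability_indices(interruptions, customers_served):
    """Compute SAIFI, SAIDI and CAIDI for one reporting period.
    interruptions: list of (customers_affected, duration_hours) per outage."""
    total_ci = sum(n for n, _ in interruptions)        # customer interruptions
    total_cmi = sum(n * d for n, d in interruptions)   # customer interruption hours
    saifi = total_ci / customers_served                # interruptions per customer
    saidi = total_cmi / customers_served               # outage hours per customer
    caidi = saidi / saifi if saifi else 0.0            # avg duration per interruption
    return saifi, saidi, caidi
```

For example, two outages affecting 1,000 customers for 2 h and 500 customers for 1 h, in a network serving 10,000 customers, give SAIFI = 0.15 and SAIDI = 0.25 h.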
Abstract:
We propose a new family of risk measures, called GlueVaR, within the class of distortion risk measures. Analytical closed-form expressions are given for the most frequently used distribution functions in financial and insurance applications. The relationship between GlueVaR, Value-at-Risk (VaR), and Tail Value-at-Risk (TVaR) is explained. Tail-subadditivity is investigated, and it is shown that some GlueVaR risk measures satisfy this property. An interpretation in terms of risk attitudes is provided, and the applicability to non-financial problems such as health, safety, environmental, or catastrophic risk management is discussed.
Abstract:
A new family of distortion risk measures, GlueVaR, was proposed in Belles-Sampera et al. (2013) to provide risk assessments lying between those given by common quantile-based risk measures. GlueVaR risk measures may be expressed as a combination of these standard risk measures. We show here that this relationship may be used to obtain approximations of GlueVaR measures for general skewed distribution functions using the Cornish-Fisher expansion. A subfamily of GlueVaR measures satisfies the tail-subadditivity property. An example of risk measurement based on real insurance claim data is presented, illustrating the implications of tail-subadditivity in the aggregation of risks.
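The combination property mentioned above can be sketched with empirical estimators. A minimal illustration: in the GlueVaR family the weights w1, w2, w3 are derived from the distortion parameters h1 and h2; here they are supplied directly, which is our simplification.

```python
import math

def var(sample, alpha):
    """Empirical Value-at-Risk at level alpha (losses as positive numbers)."""
    s = sorted(sample)
    return s[max(math.ceil(alpha * len(s)) - 1, 0)]

def tvar(sample, alpha):
    """Empirical Tail Value-at-Risk: mean of losses at or above VaR_alpha."""
    q = var(sample, alpha)
    tail = [x for x in sample if x >= q]
    return sum(tail) / len(tail)

def gluevar(sample, alpha, beta, w1, w2, w3):
    """GlueVaR as a weighted combination of TVaR_beta, TVaR_alpha and VaR_alpha."""
    return w1 * tvar(sample, beta) + w2 * tvar(sample, alpha) + w3 * var(sample, alpha)
```

With suitable weights the combination interpolates between VaR and TVaR, which is what places GlueVaR "between" the common quantile-based measures.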
Abstract:
Problems related to fire hazard and fire management have become, in recent decades, one of the most relevant issues in the Wildland-Urban Interface (WUI), that is, the area where human infrastructures meet or intermingle with natural vegetation. In this paper we develop a robust geospatial method for defining and mapping the WUI in the Alpine environment, where most interactions between infrastructures and wildland vegetation concern fire ignition through human activities, whereas no significant threats exist for infrastructures due to contact with burning vegetation. We used the three Alpine Swiss cantons of Ticino, Valais and Grisons as the study area. The features representing anthropogenic infrastructures (urban or infrastructural components of the WUI) as well as forest-cover-related features (the wildland component of the WUI) were selected from the Swiss Topographic Landscape Model (TLM3D). Georeferenced forest fire occurrences derived from the WSL Swissfire database were used to define suitable WUI interface distances. The Random Forest algorithm was applied to estimate the importance of predictor variables for fire ignition occurrence. This revealed that buildings and drivable roads are the most relevant anthropogenic components with respect to fire ignition. We consequently defined the combination of drivable roads and easily accessible (i.e. within 100 m of the next drivable road) buildings as the WUI-relevant infrastructural component. For the definition of the interface (buffer) distance between the WUI infrastructural and wildland components, we computed the empirical cumulative distribution functions (ECDF) of the percentage of ignition points (observed and simulated) arising at increasing distances from the selected infrastructures. The ECDF facilitates the calculation of both the distance at which a given percentage of ignition points occurred and, in turn, the amount of forest area covered at a given distance.
Finally, we developed a GIS ModelBuilder routine to map the WUI for the selected buffer distance. The approach was found to be reproducible, robust (based on statistical analyses for evaluating parameters) and flexible (buffer distances depending on the targeted final area covered) so that fire managers may use it to detect WUI according to their specific priorities.
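The ECDF-based choice of interface distance described above reduces to a small computation. A minimal sketch (function names and the distance sample are hypothetical):

```python
def ecdf(distances):
    """Empirical CDF of ignition-to-infrastructure distances,
    returned as (distance, cumulative fraction) pairs."""
    s = sorted(distances)
    n = len(s)
    return [(d, (i + 1) / n) for i, d in enumerate(s)]

def buffer_distance(distances, coverage):
    """Smallest distance capturing at least `coverage` (e.g. 0.9)
    of the ignition points -- the WUI buffer distance."""
    for d, p in ecdf(distances):
        if p >= coverage:
            return d
    return max(distances)
```

Reading the ECDF the other way, evaluating the cumulative fraction at a fixed distance, gives the share of ignitions (and hence forest area) covered by a candidate buffer.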
Abstract:
Neural networks are a set of mathematical methods and computer programs designed to simulate the information processing and knowledge acquisition of the human brain. In recent years their application in chemistry has increased significantly, owing to their special suitability for modelling complex systems. The basic principles of two types of neural network, multi-layer perceptrons and radial basis functions, are introduced, as well as a pruning approach to architecture optimization. Two analytical applications based on near-infrared spectroscopy are presented: the first for the determination of nitrogen content in wheat leaves using multi-layer perceptron networks, and the second for the determination of Brix in sugar-cane juices using radial basis function networks.
Abstract:
This master's thesis investigates how, by analysing the behaviour of an online store's visitor flow, justified decisions can be made about the relevant items and their parameters in a situation where extensive historical data on realised sales are lacking. Based on a literature review, a solution model was constructed that rests on forming and testing potential demand drivers. The driver selected on the basis of a test series is used to estimate item demand, so that it can stand in for realised sales, for example in a Pareto analysis. Attention can thus be focused on a limited number of high-importance items and on their detailed parameters that matter in customers' purchase decisions. In addition, items can be identified whose problem is either poor online visibility or a mismatch with customer needs. The principle used for testing the drivers is an examination of the consistency of cumulative distribution functions, built from three consecutive stages: visual inspection, a two-sample two-sided Kolmogorov-Smirnov goodness-of-fit test, and a Pearson correlation test. The model and the demand driver produced with it were tested in an online store aimed at boating consumers, where it identified, at the head of the Pareto distribution, a large number of items whose parameters contained factors unfavourable to sales. At the other end of the distribution, hundreds of items were identified whose problem is apparently either poor online visibility or a mismatch between the items and customer needs.
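The two quantitative stages of the test series above, a two-sample Kolmogorov-Smirnov comparison of cumulative distribution functions and a Pearson correlation test, can be sketched as follows (a minimal illustration computing only the statistics; critical values and p-values are omitted):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: largest vertical gap
    between the two empirical cumulative distribution functions."""
    sa, sb = sorted(a), sorted(b)
    gap = 0.0
    for x in set(sa + sb):
        fa = bisect.bisect_right(sa, x) / len(sa)
        fb = bisect.bisect_right(sb, x) / len(sb)
        gap = max(gap, abs(fa - fb))
    return gap

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A small KS statistic between the driver's distribution and the (partial) sales distribution, together with a high correlation, supports using the driver in place of realised sales.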
Abstract:
Chlorhexidine is an effective antiseptic used widely in disinfecting products (hand soap), oral products (mouthwash), and is known to have potential applications in the textile industry. Chlorhexidine has been studied extensively through a biological and biochemical lens, showing evidence that it attacks the semipermeable membrane in bacterial cells. Although extremely lethal to bacterial cells, the present understanding of the exact mode of action of chlorhexidine is incomplete. A biophysical approach has been taken to investigate the potential location of chlorhexidine in the lipid bilayer. Deuterium nuclear magnetic resonance was used to characterize the molecular arrangement of mixed phospholipid/drug formulations. Powder spectra were analyzed using the de-Pake-ing technique, a method capable of extracting both the orientation distribution and the anisotropy distribution functions simultaneously. The results from samples of protonated phospholipids mixed with deuterium-labelled chlorhexidine are compared to those from samples of deuterated phospholipids and protonated chlorhexidine to determine its location in the lipid bilayer. A series of neutron scattering experiments were also conducted to study the biophysical interaction of chlorhexidine with a model phospholipid membrane of DMPC, a common saturated lipid found in bacterial cell membranes. The results found the hexamethylene linker to be located at the depth of the glycerol/phosphate region of the lipid bilayer. As drug concentration was increased in samples, a dramatic decrease in bilayer thickness was observed. Differential scanning calorimetry experiments have revealed a depression of the DMPC bilayer gel-to-lamellar phase transition temperature with an increasing drug concentration. The enthalpy of the transition remained the same for all drug concentrations, indicating a strictly drug/headgroup interaction, thus supporting the proposed location of chlorhexidine. 
In combination, these results lead to the hypothesis that the drug is folded approximately in half on its hexamethylene linker, with the hydrophobic linker at the depth of the glycerol/phosphate region of the lipid bilayer and the hydrophilic chlorophenyl groups located at the lipid headgroup. This arrangement seems to suggest that the drug molecule acts as a wedge to disrupt the bilayer. In vivo, this should make the cell membrane leaky, which is in agreement with a wide range of bacteriological observations.
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests, properly adjusted to deal with discrete distributions. We also propose a combined test procedure whose level is again perfectly controlled through the Monte Carlo test technique and which has better power properties than the individual tests being combined. Finally, in a simulation experiment, we show that the suggested technique provides perfect control of test size and that the new tests can yield sizeable power improvements.
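The permutation/Monte Carlo construction described above can be sketched as follows. The statistic shown (an absolute difference of means) is just one example; the paper works with distributional statistics, but the size-control mechanism, the (1 + exceedances)/(n_perm + 1) p-value, is the same.

```python
import random

def mean_diff(a, b):
    """Absolute difference of sample means (one possible test statistic)."""
    return abs(sum(a) / len(a) - sum(b) / len(b))

def mc_permutation_test(x, y, statistic, n_perm=999, seed=0):
    """Permutational two-sample test with a Monte Carlo p-value.
    p = (1 + #{permuted stats >= observed}) / (n_perm + 1) controls
    the test size exactly for any number of permutations."""
    rng = random.Random(seed)
    observed = statistic(x, y)
    pooled = list(x) + list(y)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled sample
        if statistic(pooled[:len(x)], pooled[len(x):]) >= observed:
            exceed += 1
    return (1 + exceed) / (n_perm + 1)
```

Combining several statistics amounts to applying the same machinery to, e.g., the minimum of their individual Monte Carlo p-values.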
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
Abstract:
Despite a vast literature on the structural, electronic, and thermodynamic properties of amorphous silicon (a-Si), the microscopic structure of this covalent semiconductor still eludes an exact description. Several questions remain open, concerning, for example, how disorder is distributed through the amorphous matrix: uniformly, or within small, highly strained regions? Moreover, how does this material relax: through homogeneous changes that increase medium-range order, through the annihilation of point defects, or through a combination of these phenomena? The first article presented in this thesis characterises coordination defects in terms of their spatial arrangement and their formation energies. In addition, spatial correlations between structural defects are examined using a parameter that quantifies the probability that two defective sites share a bond. The typical geometries associated with under- and over-coordinated atoms are extracted from the model and described using partial tetrahedral-angle distributions. The influence of annealing-induced relaxation on structural defects is also analysed. The second article examines the relationship between medium-range order and thermal relaxation.
Recent experimental measurements show that amorphous silicon prepared by ion bombardment, when annealed, undergoes structural changes that leave a signature in the radial distribution function up to distances corresponding to the third neighbour shell [1, 2]. It is not clear whether these changes are a repercussion of an increase in short-range order or truly the manifestation of an ordering among the dihedral angles; this section draws on numerical simulations of ion implantation and annealing to answer this question. In addition, correlations between the tetrahedral and dihedral angles are analysed from the a-Si model.
Abstract:
In this paper, a comparison study among three neural network algorithms for the synthesis of array patterns is presented. The neural networks are used to estimate the array elements' excitations for an arbitrary pattern. The architecture of the neural networks is discussed and simulation results are presented. Two new neural networks, based on radial basis functions (RBFs) and wavelet neural networks (WNNs), are introduced. The proposed networks offer a more efficient synthesis procedure compared to other available techniques.
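A radial basis function network of the kind mentioned can be sketched in miniature: Gaussian basis functions centred on the training samples, with weights obtained by solving the resulting interpolation system. This is a generic one-dimensional illustration, not the authors' array-synthesis architecture.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, width=1.0):
    """Weights of a Gaussian RBF network with one centre per training sample."""
    phi = [[math.exp(-((x - c) / width) ** 2) for c in xs] for x in xs]
    return gauss_solve(phi, ys)

def rbf_eval(weights, centres, x, width=1.0):
    """Network output: weighted sum of Gaussian basis functions."""
    return sum(w * math.exp(-((x - c) / width) ** 2) for w, c in zip(weights, centres))
```

Because the Gaussian interpolation matrix is nonsingular for distinct centres, the fitted network reproduces the training targets exactly at the centres.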
Abstract:
We analyze how the spatial localization properties of pairing correlations are changing in a major neutron shell of heavy nuclei. It is shown that the radial distribution of the pairing density depends strongly on whether the chemical potential is close to a low or a high angular momentum level and has little sensitivity to whether the pairing force acts at the surface or in the bulk. The pairing density averaged over one major shell is, however, rather flat, exhibiting little dependence on the pairing force. Hartree-Fock-Bogoliubov calculations for the isotopic chain 100-132Sn are presented for demonstration purposes.
Abstract:
Quantile functions are efficient and equivalent alternatives to distribution functions in the modeling and analysis of statistical data (see Gilchrist, 2000; Nair and Sankaran, 2009). Motivated by this, in the present paper we introduce a quantile-based Shannon entropy function. We also introduce the residual entropy function in the quantile setup and study its properties. Unlike the residual entropy function of Ebrahimi (1996), the residual quantile entropy function determines the quantile density function uniquely through a simple relationship. The measure is used to define two nonparametric classes of distributions.
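The quantile form of Shannon entropy underlying this construction can be checked numerically: with quantile function Q and quantile density q(u) = Q'(u), the entropy is H = ∫₀¹ log q(u) du. For an Exponential(λ) law, q(u) = 1/(λ(1-u)), and this recovers the classical value 1 − log λ. A sketch (function names are ours):

```python
import math

def quantile_entropy(q_density, n=100_000):
    """Shannon entropy in quantile form, H = integral over (0,1) of
    log q(u) du, approximated with the midpoint rule."""
    return sum(math.log(q_density((i + 0.5) / n)) for i in range(n)) / n

# Exponential(lambda): Q(u) = -ln(1-u)/lam, hence q(u) = 1/(lam*(1-u)),
# and the exact entropy is 1 - ln(lam).
lam = 2.0
H = quantile_entropy(lambda u: 1.0 / (lam * (1.0 - u)))
```

The agreement with the density-based entropy illustrates why quantile functions are "equivalent alternatives" to distribution functions for such measures.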
Abstract:
Progress in microsystem technology and nanotechnology places extended requirements on fabrication processes. The trend is moving towards structuring on the nanometre scale on the one hand, and towards the fabrication of structures with high aspect ratio (ratio of vertical to lateral dimensions) and large depths on the 100 µm scale on the other. Current procedures for the microstructuring of silicon are wet chemical etching and dry or plasma etching. A modern plasma etching technique for the structuring of silicon is the so-called "gas chopping" etching technique (also called "time-multiplexed etching"). In this technique, passivation cycles, which prevent lateral underetching of the sidewalls, are constantly alternated with etching cycles, which etch preferably in the vertical direction because of the sidewall passivation, throughout the complete etching process. To this end, a CHF3/CH4 plasma, which generates CF monomers, is employed during the passivation cycle, and an SF6/Ar plasma, which generates fluorine radicals and ions, is employed during the etching cycle. Depending on the requirements on the etched profile, the durations of the individual passivation and etching cycles range from a few seconds up to several minutes. The profiles achieved with this etching process depend crucially on the flow of reactants (i.e. CF monomers during the passivation cycle, and ions and fluorine radicals during the etching cycle) to the bottom of the profile, especially for profiles with high aspect ratio. With regard to the predictability of the etching processes, knowledge of the fundamental effects taking place during a gas chopping etching process, and of their impact on the resulting profile, is required. For this purpose, in the context of this work, a model for the description of the profile evolution of such etching processes is proposed, which considers the reactions (etching or deposition) at the sample surface on a phenomenological basis.
Furthermore, the reactant transport inside the etching trench is modelled, based on angular distribution functions and on absorption probabilities at the sidewalls and at the bottom of the trench. A comparison of the simulated profiles with corresponding experimental profiles reveals that the proposed model reproduces the experimental profiles, provided the angular distribution functions and absorption probabilities employed in the model are in agreement with data found in the literature. Therefore the model developed in the context of this work is an adequate description of the effects taking place during a gas chopping plasma etching process.
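The transport picture described here, reactant flux to the trench bottom governed by an angular distribution function and wall absorption probabilities, can be caricatured with a small 2-D Monte Carlo. The geometry, the cosine (Lambert) emission law, and the single sticking probability are our simplifying assumptions, not the author's implementation.

```python
import math
import random

def bottom_flux_fraction(aspect_ratio, sticking, n=20000, seed=1):
    """Fraction of reactants entering a 2-D trench (width 1, depth =
    aspect_ratio) that reach the bottom. Entry and diffuse re-emission
    follow a cosine angular distribution; sidewall hits are absorbed
    with probability `sticking`, otherwise re-emitted."""
    rng = random.Random(seed)
    depth, hits = float(aspect_ratio), 0
    for _ in range(n):
        x, y = rng.random(), 0.0                 # start at the trench mouth
        a = math.asin(2 * rng.random() - 1)      # cosine law about the vertical
        dx, dy = math.sin(a), math.cos(a)
        while True:
            ts = []                              # candidate flight distances
            if dx > 0: ts.append(((1 - x) / dx, 'wall'))
            if dx < 0: ts.append((-x / dx, 'wall'))
            if dy > 0: ts.append(((depth - y) / dy, 'bottom'))
            if dy < 0: ts.append((-y / dy, 'mouth'))
            t, kind = min(ts)
            x, y = x + t * dx, y + t * dy
            if kind == 'bottom':
                hits += 1                        # reactant reaches the trench floor
                break
            if kind == 'mouth':
                break                            # escaped back into the plasma
            if rng.random() < sticking:
                break                            # absorbed on the sidewall
            a = math.asin(2 * rng.random() - 1)  # diffuse re-emission
            inward = 1.0 if x < 0.5 else -1.0    # inward normal of the wall hit
            dx, dy = inward * math.cos(a), math.sin(a)
    return hits / n
```

Even this toy model reproduces the qualitative effect the thesis relies on: the reactant flux to the bottom drops sharply as the aspect ratio grows, which is why profile evolution at high aspect ratio hinges on the angular distributions and absorption probabilities.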