969 results for Probability distribution functions
Abstract:
The work presented in this Thesis is based on the computation of dynamical models for Dwarf Spheroidal Galaxies, studying the problem through the use of distribution functions. We considered a particular type of distribution functions, "action-based distribution functions", which are functions of the action variables only. Fornax was described with an appropriate distribution function, and the problem of constructing dynamical models was addressed assuming both a dark matter halo with a constant density distribution in the inner regions and a cusped halo. For simplicity, spherical symmetry was assumed and the gravitational potential of the stellar component was not computed explicitly (the stars are tracers in a fixed gravitational potential). Through a direct comparison with some observables, such as the projected stellar density profile and the line-of-sight velocity dispersion profile, some models representative of the dynamics of Fornax were found. Models computed with action-based distribution functions allow anisotropy profiles to be determined self-consistently. All the computed models are characterized by an anisotropy profile with strong tangential anisotropy. The dark matter estimates of these models were then compared with the most common mass estimators used in the literature. The ratio between the total mass of the system (stellar component plus dark matter) and the stellar component of Fornax was also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, we also presented some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark matter halo.
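For reference, the velocity anisotropy profile mentioned above is conventionally defined, in standard notation for spherical systems (not specific to this Thesis), as

\[
\beta(r) \;=\; 1 \;-\; \frac{\sigma_\theta^2(r) + \sigma_\phi^2(r)}{2\,\sigma_r^2(r)},
\]

so that \beta < 0 corresponds to tangential anisotropy (as in the models found here), \beta = 0 to isotropy, and \beta \to 1 to purely radial orbits.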
Abstract:
This work aims to evaluate the reliability of these levee systems by calculating the probability of “failure” of selected levee stretches under different loads, using probabilistic methods based on fragility curves obtained through the Monte Carlo Method. Overtopping and piping are considered as failure mechanisms (since these are the most frequent), and the analysis covers the major levee system of the Po River, with a primary focus on the section between Piacenza and Cremona, in the lower-middle area of the Padana Plain. The novelty of this approach is that it checks the reliability of individual embankment stretches, not just a single section, while taking into account the variability of the levee system geometry from one stretch to another. For each levee stretch analysed, the work also considers a probability distribution of the load variables involved in the definition of the fragility curves, which is influenced by the differences in the topography and morphology of the riverbed along the analysed reach as they affect the levee system in its entirety. A classification is proposed, for both failure mechanisms, to give an indication of the reliability of the levee system based on the information obtained from the fragility curve analysis. To accomplish this work, a hydraulic model has been developed in which a 500-year flood is simulated to determine the residual hazard of failure for each levee stretch at the corresponding water depth, and the results are then compared with the proposed classifications. This work also aims to act as an interface between Applied Geology and Environmental Hydraulic Engineering, two professions whose close collaboration is needed to resolve and improve the estimation of hydraulic risk.
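As an illustration of how a fragility curve for a single failure mechanism can be built with the Monte Carlo Method, the Python sketch below estimates the conditional probability of overtopping failure as a function of water level by sampling an uncertain crest elevation; the limit state and all parameter values are hypothetical and are not those of the Po River model described above.

import numpy as np

rng = np.random.default_rng(42)

def overtopping_fragility(water_levels, n_samples=10_000):
    # Hypothetical limit state: failure when the water level exceeds the
    # (uncertain) levee crest elevation, sampled from a normal distribution
    # to represent geometric variability along the stretch.
    crest = rng.normal(loc=52.0, scale=0.4, size=n_samples)  # crest elevation [m a.s.l.], assumed
    return np.array([np.mean(h > crest) for h in water_levels])

levels = np.linspace(50.0, 54.0, 41)        # candidate water levels [m a.s.l.]
curve = overtopping_fragility(levels)       # fragility curve: P(failure | water level)
for h, p in zip(levels[::10], curve[::10]):
    print(f"water level {h:.1f} m -> P(failure) ~ {p:.3f}")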
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, but especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays, to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters gamma = (theta, eta) into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with computationally intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
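As a concrete instance of inference that "depends as little as possible upon the nuisance parameters", recall the proportional hazards model mentioned above: with hazard \lambda(t \mid x) = \lambda_0(t)\exp(x^\top \beta), the infinite-dimensional baseline hazard \lambda_0 plays the role of the nuisance parameter eta, and Cox's partial likelihood for the parameter of interest \beta does not involve it at all:

\[
L(\beta) \;=\; \prod_{i:\,\delta_i = 1} \frac{\exp(x_i^\top \beta)}{\sum_{j \in R(t_i)} \exp(x_j^\top \beta)},
\]

where \delta_i = 1 indicates an uncensored event at time t_i and R(t_i) is the set of subjects still at risk at t_i.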
Abstract:
In many applications the observed data can be viewed as a censored high-dimensional full-data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high-dimensional censored data structure. We provide a general method for construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus will require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotics theorem are rigorously verified in one of the examples and the key condition of the general theorem is verified for all examples.
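Schematically, and in generic notation rather than the paper's own, a one-step estimator updates an initial estimator \hat\psi_0 of a parameter \psi by adding the empirical mean of an estimated influence function:

\[
\hat\psi_1 \;=\; \hat\psi_0 \;+\; \frac{1}{n}\sum_{i=1}^{n} \hat D(O_i),
\]

where O_1, \dots, O_n are the observed censored data and \hat D is an estimate of the efficient influence curve at the chosen submodel; estimating \hat D is where the parametric or semiparametric model for the censoring mechanism enters.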
Abstract:
Habitat selection has been one of the main research topics in ecology for decades. Nevertheless, many aspects of habitat selection still need to be explored. In particular, previous studies have overlooked the importance of temporal variation in habitat selection and the value of including data on reproductive success in order to describe the best-quality habitat for a species. We used data collected from radiocollared wolves in Yellowstone National Park (USA) between 1996 and 2008 to describe wolf habitat selection. In particular, we aimed to identify i) seasonal differences in wolf habitat selection, ii) factors influencing interannual variation in habitat selection, and iii) the effect of habitat selection on wolf reproductive success. We used probability density functions to describe wolf habitat use and habitat coverages to represent the habitat available to wolves. We used regression analysis to connect habitat use with habitat characteristics and habitat selection with reproductive success. Our most relevant result was discovering strong interannual variability in wolf habitat selection. This variability was partly explained by pack identity and by between-year differences in litter size and pack leadership (summer) and in pack size and precipitation (winter). We also detected some seasonal differences. Wolves selected open habitats, intermediate elevations, and intermediate distances from roads, and avoided steep slopes in late winter. They selected areas close to roads and avoided steep slopes in summer. In early winter, wolves selected wetlands, herbaceous and shrub vegetation types, and areas at intermediate elevation and distance from roads. Surprisingly, the habitat characteristics selected by wolves were not useful in predicting reproductive success. We hypothesize that interannual variability in wolf habitat selection may be too strong to detect effects on reproductive success. Moreover, prey availability and competitor pressure, which we did not assess, may also influence wolf reproductive success. This project demonstrated how important temporal variation is in shaping patterns of habitat selection. We still believe in the value of running long-term studies, but the effect of temporal variation should always be taken into account.
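As a minimal sketch of the kind of probability-density-function description of space use referred to above, the Python fragment below fits a generic kernel density estimate to hypothetical relocation coordinates; it illustrates the general technique, not the exact utilization-distribution method of this study.

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical radiocollar relocations (projected x, y coordinates in metres)
rng = np.random.default_rng(0)
relocations = rng.normal(loc=[5000.0, 3000.0], scale=[800.0, 600.0], size=(200, 2))

# Kernel density estimate of the intensity of use (utilization distribution)
kde = gaussian_kde(relocations.T)

# Relative intensity of use at two candidate habitat locations
candidates = np.array([[5000.0, 3000.0], [7000.0, 4500.0]])
print(kde(candidates.T))  # higher value = more intensively used area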
Analysis of spring break-up and its effects on a biomass feedstock supply chain in northern Michigan
Abstract:
Demand for bio-fuels is expected to increase, due to rising prices of fossil fuels and concerns over greenhouse gas emissions and energy security. The overall cost of biomass energy generation is primarily related to biomass harvesting activity, transportation, and storage. With a commercial-scale cellulosic ethanol processing facility in Kinross Township of Chippewa County, Michigan, about to be built, models including a simulation model and an optimization model have been developed to provide decision support for the facility. Both models track cost, emissions and energy consumption. While the optimization model provides guidance for a long-term strategic plan, the simulation model aims to present detailed output for specified operational scenarios over an annual period. Most importantly, the simulation model considers the uncertainty of spring break-up timing, i.e., seasonal road restrictions. Spring break-up timing is important because it will impact the feasibility of harvesting activity and the time duration of transportation restrictions, which significantly changes the availability of feedstock for the processing facility. This thesis focuses on the statistical model of spring break-up used in the simulation model. Spring break-up timing depends on various factors, including temperature, road conditions and soil type, as well as individual decision-making processes at the county level. The spring break-up model, based on the historical spring break-up data from 27 counties over the period of 2002-2010, starts by specifying the probability distribution of a particular county’s spring break-up start day and end day, and then relates the spring break-up timing of the other counties in the harvesting zone to the first county. In order to estimate the dependence relationship between counties, regression analyses, including standard linear regression and reduced major axis regression, are conducted. Using realizations (scenarios) of spring break-up generated by the statistical spring break-up model, the simulation model is able to probabilistically evaluate different harvesting and transportation plans to help the bio-fuel facility select the most effective strategy. For early spring break-up, which usually indicates a longer than average break-up period, more log storage is required, total cost increases, and the probability of plant closure increases. The risk of plant closure may be partially offset through increased use of rail transportation, which is not subject to spring break-up restrictions. However, rail availability and rail yard storage may then become limiting factors in the supply chain. Rail use will impact total cost, energy consumption, system-wide CO2 emissions, and the reliability of providing feedstock to the bio-fuel processing facility.
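A simplified sketch of the scenario-generation step described above is given below in Python; the distribution family, its parameters, the county names and the regression coefficients are placeholders for illustration, not the values fitted to the 2002-2010 county data.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical model for the reference county: break-up start day (day of year)
# and break-up duration, each drawn from a fitted distribution (normal assumed here).
REF_START_MEAN, REF_START_SD = 75.0, 7.0          # ~mid-March
REF_DURATION_MEAN, REF_DURATION_SD = 42.0, 6.0    # days

# Hypothetical (slope, intercept) from the regressions relating each county's
# start day to the reference county's start day.
COUNTY_COEFFS = {"County A": (1.00, 0.0), "County B": (0.95, 5.2), "County C": (1.05, -3.1)}

def sample_scenario():
    """Draw one realization of spring break-up timing for all counties."""
    ref_start = rng.normal(REF_START_MEAN, REF_START_SD)
    duration = max(rng.normal(REF_DURATION_MEAN, REF_DURATION_SD), 0.0)
    scenario = {}
    for county, (slope, intercept) in COUNTY_COEFFS.items():
        start = slope * ref_start + intercept
        scenario[county] = (round(start), round(start + duration))  # (start day, end day)
    return scenario

print(sample_scenario())

Feeding many such realizations into the simulation model is what allows harvesting and transportation plans to be compared probabilistically.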
Abstract:
It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. In this thesis, we present an investigation of charged inertial particles embedded in turbulence. We have developed a theoretical description for the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (the particle inertial time scale relative to the turbulence dissipation time scale), the Coulomb-turbulence parameter (the ratio of the Coulomb terminal speed to the turbulence dissipation velocity scale), and the settling parameter (the ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity obtained from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to measured radial distribution functions for water particles in homogeneous, isotropic air turbulence. The radial distribution functions are obtained from particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term. Both terms are modified as a result of the gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution functions and the inward mean radial relative velocity is established for charged particles.
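The estimation of a radial distribution function from measured 3D particle positions follows the standard definition: count particle pairs in spherical shells and normalize by the count expected for uniformly distributed particles. A minimal Python sketch, neglecting edge corrections and any holographic-measurement details, is:

import numpy as np
from scipy.spatial.distance import pdist

def radial_distribution_function(positions, volume, bin_edges):
    # positions: (N, 3) array of particle coordinates within a sampling volume.
    n = len(positions)
    density = n / volume
    r = pdist(positions)                               # all pairwise separations
    counts, edges = np.histogram(r, bins=bin_edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    expected = density * shell_vol * n / 2.0           # pairs expected for a uniform distribution
    return counts / expected, 0.5 * (edges[1:] + edges[:-1])

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(2000, 3))           # uniform positions -> g(r) ~ 1
g, r_mid = radial_distribution_function(pts, 10.0**3, np.linspace(0.1, 3.0, 30))
print(np.round(g[:5], 2))

Clustering appears as g(r) > 1 at small separations, while charge-induced repulsion suppresses g(r) at the smallest r.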
Abstract:
Let P be a probability distribution on q-dimensional space. The so-called Diaconis-Freedman effect means that, for a fixed dimension d much smaller than q, most d-dimensional projections of P look like a scale mixture of spherically symmetric Gaussian distributions. The present paper provides necessary and sufficient conditions for this phenomenon in a suitable asymptotic framework with increasing dimension q. It turns out that the conditions formulated by Diaconis and Freedman (1984) are not only sufficient but necessary as well. Moreover, letting P̂ be the empirical distribution of n independent random vectors with distribution P, we investigate the behavior of the empirical process √n(P̂ − P) under random projections, conditional on P̂.
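A quick numerical illustration of the Diaconis-Freedman effect (a generic Python sketch, not the asymptotic framework of the paper): draw a sample from a markedly non-Gaussian high-dimensional distribution, project it onto a random low-dimensional subspace, and note that the projection is far closer to Gaussian than any raw coordinate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
q, n = 500, 2000

# Non-Gaussian P: i.i.d. exponential coordinates in q dimensions
X = rng.exponential(scale=1.0, size=(n, q))

# Random one-dimensional projection (unit vector via QR)
A, _ = np.linalg.qr(rng.normal(size=(q, 1)))
proj = (X @ A)[:, 0]

def ks_to_normal(sample):
    # Kolmogorov-Smirnov distance between the standardized sample and N(0, 1)
    z = (sample - sample.mean()) / sample.std()
    return stats.kstest(z, "norm").statistic

print("raw coordinate:   ", round(ks_to_normal(X[:, 0]), 3))  # clearly non-normal
print("random projection:", round(ks_to_normal(proj), 3))     # much closer to normal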
Abstract:
We describe a technique for interactive rendering of diffraction effects produced by biological nanostructures such as snake skin surface gratings. Our approach uses imagery from atomic force microscopy that accurately captures the nanostructures responsible for structural coloration, that is, coloration due to wave interference, in a variety of animals. We develop a rendering technique that constructs bidirectional reflection distribution functions (BRDFs) directly from the measured data and leverages precomputation to achieve interactive performance. We demonstrate results of our approach using various shapes of the surface grating nanostructures. Finally, we evaluate the accuracy of our precomputation-based technique and compare to a reference BRDF construction technique.
Abstract:
Statistical physicists assume a probability distribution over micro-states to explain thermodynamic behavior. The question of this paper is whether these probabilities are part of a best system and can thus be interpreted as Humean chances. I consider two strategies, viz. a globalist strategy as suggested by Loewer, and a localist one as advocated by Frigg and Hoefer. Both strategies fail because the systems they are part of have rivals that are roughly equally good, while ontic probabilities should be part of a clearly winning system. I conclude with the diagnosis that well-defined micro-probabilities underestimate the robust character of explanations in statistical physics.
Abstract:
The inclusive jet cross-section has been measured in proton-proton collisions at √s = 2.76 TeV in a dataset corresponding to an integrated luminosity of 0.20 pb−1 collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with two radius parameters of 0.4 and 0.6. The inclusive jet double-differential cross-section is presented as a function of the jet transverse momentum pT and jet rapidity y, covering a range of 20 ≤ pT < 430 GeV and |y| < 4.4. The ratio of the cross-section to the inclusive jet cross-section measurement at √s = 7 TeV, published by the ATLAS Collaboration, is calculated as a function of both transverse momentum and the dimensionless quantity xT = 2pT/√s, in bins of jet rapidity. The systematic uncertainties on the ratios are significantly reduced due to the cancellation of correlated uncertainties in the two measurements. Results are compared to the prediction from next-to-leading order perturbative QCD calculations corrected for non-perturbative effects, and next-to-leading order Monte Carlo simulation. Furthermore, the ATLAS jet cross-section measurements at √s = 2.76 TeV and √s = 7 TeV are analysed within a framework of next-to-leading order perturbative QCD calculations to determine parton distribution functions of the proton, taking into account the correlations between the measurements.
Abstract:
We describe a technique for interactive rendering of diffraction effects produced by biological nanostructures, such as snake skin surface gratings. Our approach uses imagery from atomic force microscopy that accurately captures the geometry of the nanostructures responsible for structural colouration, that is, colouration due to wave interference, in a variety of animals. We develop a rendering technique that constructs bidirectional reflection distribution functions (BRDFs) directly from the measured data and leverages pre-computation to achieve interactive performance. We demonstrate results of our approach using various shapes of the surface grating nanostructures. Finally, we evaluate the accuracy of our pre-computation-based technique and compare to a reference BRDF construction technique.
Abstract:
Statistical physicists assume a probability distribution over micro-states to explain thermodynamic behavior. The question of this paper is whether these probabilities are part of a best system and can thus be interpreted as Humean chances. I consider two Boltzmannian accounts of the Second Law, viz. a globalist and a localist one. In both cases, the probabilities fail to be chances because they have rivals that are roughly equally good. I conclude with the diagnosis that well-defined micro-probabilities underestimate the robust character of explanations in statistical physics.
Abstract:
Double-differential dijet cross-sections measured in pp collisions at the LHC with a 7 TeV centre-of-mass energy are presented as functions of dijet mass and half the rapidity separation of the two highest-pT jets. These measurements are obtained using data corresponding to an integrated luminosity of 4.5 fb−1, recorded by the ATLAS detector in 2011. The data are corrected for detector effects so that cross-sections are presented at the particle level. Cross-sections are measured up to 5 TeV dijet mass using jets reconstructed with the anti-kt algorithm for values of the jet radius parameter of 0.4 and 0.6. The cross-sections are compared with next-to-leading-order perturbative QCD calculations by NLOJet++ corrected to account for non-perturbative effects. Comparisons with POWHEG predictions, using a next-to-leading-order matrix element calculation interfaced to a parton-shower Monte Carlo simulation, are also shown. Electroweak effects are accounted for in both cases. The quantitative comparison of data and theoretical predictions obtained using various parameterizations of the parton distribution functions is performed using a frequentist method. In general, good agreement with data is observed for the NLOJet++ theoretical predictions when using the CT10, NNPDF2.1 and MSTW 2008 PDF sets. Disagreement is observed when using the ABM11 and HERAPDF1.5 PDF sets for some ranges of dijet mass and half the rapidity separation. An example setting a lower limit on the compositeness scale for a model of contact interactions is presented, showing that the unfolded results can be used to constrain contributions to dijet production beyond that predicted by the Standard Model.
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Secondly, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg that spanned the period January 1st, 2004 to August 8th, 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
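A minimal sketch of the copula construction described above: fit gamma marginals, map each series to normal scores through its fitted marginal CDF, and track the Gaussian-copula correlation over a rolling window as a simple stand-in for time-varying dependence. The data, window length and variable names below are illustrative only, not those of the Bloomberg dataset used in the study.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical daily observations of two credit-risk determinants
n = 500
x = rng.gamma(shape=2.0, scale=1.5, size=n)
y = 0.5 * x + rng.gamma(shape=3.0, scale=1.0, size=n)   # induce some dependence

# 1) Fit gamma marginals
ax, locx, sx = stats.gamma.fit(x, floc=0.0)
ay, locy, sy = stats.gamma.fit(y, floc=0.0)

# 2) Probability integral transform to normal scores (Gaussian copula scale)
zx = stats.norm.ppf(stats.gamma.cdf(x, ax, loc=locx, scale=sx))
zy = stats.norm.ppf(stats.gamma.cdf(y, ay, loc=locy, scale=sy))

# 3) Dependence separated from the marginals: static vs. rolling (time-varying) correlation
scores = pd.DataFrame({"zx": zx, "zy": zy})
print("static copula correlation:", round(float(np.corrcoef(zx, zy)[0, 1]), 3))
print(scores["zx"].rolling(window=60).corr(scores["zy"]).tail())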