985 results for bivariate distribution-functions
Abstract:
In this paper we construct a model for the simultaneous compaction, by which clusters are restructured, and growth of clusters by pairwise coagulation. The model has the form of a multicomponent aggregation problem in which the components are cluster mass and cluster diameter. Following suitable approximations, exact explicit solutions are derived which may be useful for the verification of simulations of such systems. Numerical simulations are presented to illustrate typical behaviour and to show the accuracy of approximations made in deriving the model. The solutions are then simplified using asymptotic techniques to show the relevant timescales of the kinetic processes and elucidate the shape of the cluster distribution functions at large times.
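As background for abstracts like the one above, the single-component Smoluchowski coagulation equation with constant kernel has a classic closed-form solution that is often used to verify simulation codes. A minimal sketch (monodisperse initial condition and kernel K = 1 assumed; this is the textbook analogue, not the multicomponent model of the paper):

```python
# Exact solution of the constant-kernel (K = 1) Smoluchowski coagulation
# equation with monodisperse initial condition n_k(0) = delta_{k,1}:
#     n_k(t) = (t/2)^(k-1) / (1 + t/2)^(k+1)
# Mass conservation (sum over k of k * n_k = 1) is the standard sanity
# check for pairwise-coagulation solvers.

def cluster_density(k: int, t: float) -> float:
    """Number density of clusters of mass k at time t (constant kernel)."""
    x = t / 2.0
    return x ** (k - 1) / (1.0 + x) ** (k + 1)

def total_mass(t: float, kmax: int = 500) -> float:
    """Truncated first moment: sum_k k * n_k(t)."""
    return sum(k * cluster_density(k, t) for k in range(1, kmax + 1))
```

Because the size distribution decays geometrically, truncating the mass sum at a few hundred cluster sizes already conserves the first moment to high accuracy.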
Abstract:
We investigate key characteristics of Ca²⁺ puffs in deterministic and stochastic frameworks that all incorporate the cellular morphology of IP₃ receptor channel clusters. In a first step, we numerically study Ca²⁺ liberation in a three-dimensional representation of a cluster environment with reaction-diffusion dynamics in both the cytosol and the lumen. These simulations reveal that Ca²⁺ concentrations at a releasing cluster range from 80 µM to 170 µM and equilibrate almost instantaneously on the time scale of the release duration. These highly elevated Ca²⁺ concentrations eliminate Ca²⁺ oscillations in a deterministic model of an IP₃R channel cluster at physiological parameter values, as revealed by a linear stability analysis. The reason lies in the saturation of all feedback processes in the IP₃R gating dynamics, so that only fluctuations can restore experimentally observed Ca²⁺ oscillations. In this spirit, we derive master equations that allow us to analytically quantify the onset of Ca²⁺ puffs and hence the stochastic time scale of intracellular Ca²⁺ dynamics. Moving up the spatial scale, we suggest formulating cellular dynamics in terms of waiting time distribution functions. This approach prevents the state-space explosion that is typical for descriptions of cellular dynamics based on channel states and still contains information on molecular fluctuations. We illustrate this method by studying global Ca²⁺ oscillations.
Abstract:
Among different classes of ionic liquids (ILs), those with cyano-based anions have been of special interest due to their low viscosity and enhanced solvation ability for a large variety of compounds. Experimental results from this work reveal that the solubility of glucose in some of these ionic liquids may be higher than in water – a well-known solvent with enhanced capacity to dissolve mono- and disaccharides. This raises questions on the ability of cyano groups to establish strong hydrogen bonds with carbohydrates and on the optimal number of cyano groups at the IL anion that maximizes the solubility of glucose. In addition to experimental solubility data, these questions are addressed in this study using a combination of density functional theory (DFT) and molecular dynamics (MD) simulations. Through the calculation of the number of hydrogen bonds, coordination numbers, energies of interaction and radial and spatial distribution functions, it was possible to explain the experimental results and to show that the ability to favorably interact with glucose is driven by the polarity of each IL anion, with the optimal anion being dicyanamide.
Abstract:
For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to compare probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible, alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments where probabilities of extremes are often more informative than central tendency measures. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates to investigate the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples for El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing merits or limitations of the ENSO-based predictors.
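The Cox-type regression machinery itself is beyond a short sketch, but the object it estimates, a CDF with an uncertainty envelope, can be illustrated with a plain empirical CDF and a distribution-free Dvoretzky–Kiefer–Wolfowitz band (a simpler alternative to the CRM envelopes, shown here only for illustration):

```python
import math

def ecdf_with_dkw_band(sample, alpha=0.05):
    """Empirical CDF of `sample` plus a (1 - alpha) DKW confidence band.

    Returns (xs, F, lo, hi): sorted values, ECDF heights, and the band
    F +/- eps clipped to [0, 1], where eps = sqrt(ln(2/alpha) / (2n)).
    The DKW inequality makes this a simultaneous band for the whole CDF.
    """
    xs = sorted(sample)
    n = len(xs)
    eps = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    F = [(i + 1) / n for i in range(n)]
    lo = [max(0.0, f - eps) for f in F]
    hi = [min(1.0, f + eps) for f in F]
    return xs, F, lo, hi
```

Applied to, say, total wet-season rainfall records, the band width shrinks as 1/sqrt(n), which makes explicit how estimation uncertainty depends on the length of the climate record.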
Abstract:
The velocity function (VF) is a fundamental observable statistic of the galaxy population that is similar to the luminosity function in importance, but much more difficult to measure. In this work we present the first directly measured circular VF that is representative between 60 < v_circ < 320 km s^-1 for galaxies of all morphological types at a given rotation velocity. For the low-mass galaxy population (60 < v_circ < 170 km s^-1), we use the HI Parkes All Sky Survey VF. For the massive galaxy population (170 < v_circ < 320 km s^-1), we use stellar circular velocities from the Calar Alto Legacy Integral Field Area Survey (CALIFA). In earlier work we obtained the measurements of circular velocity at the 80% light radius for 226 galaxies and demonstrated that the CALIFA sample can produce volume-corrected galaxy distribution functions. The CALIFA VF includes homogeneous velocity measurements of both late and early-type rotation-supported galaxies and has the crucial advantage of not missing gas-poor massive ellipticals that HI surveys are blind to. We show that both VFs can be combined in a seamless manner, as their ranges of validity overlap. The resulting observed VF is compared to VFs derived from cosmological simulations of the z = 0 galaxy population. We find that dark-matter-only simulations show a strong mismatch with the observed VF. Hydrodynamic simulations fare better, but still do not fully reproduce observations.
Abstract:
In high-energy hadron collisions, the production at parton level of heavy-flavour quarks (charm and bottom) is described by perturbative Quantum Chromodynamics (pQCD) calculations, given the hard scale set by the quark masses. However, in hadron-hadron collisions, predictions for the heavy-flavour hadrons eventually produced entail knowledge of the parton distribution functions, as well as an accurate description of the hadronisation process. The latter is taken into account via the fragmentation functions measured at e$^+$e$^-$ colliders or in ep collisions, but several observations in LHC Run 1 and Run 2 data challenged this picture. In this dissertation, I studied charm hadronisation in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ALICE experiment at the LHC, making use of a large-statistics data sample collected during LHC Run 2. The production of heavy flavour in this collision system will be discussed, along with various hadronisation models implemented in commonly used event generators, which try to reproduce experimental data while taking into account the unexpected results at the LHC regarding the enhanced production of charmed baryons. The role of multiple parton interactions (MPI) will also be presented, and how it affects the total charm production as a function of multiplicity. The ALICE apparatus will be described before moving to the experimental results, which are related to the measurement of relative production rates of the charm hadrons $\Sigma_c^{0,++}$ and $\Lambda_c^+$, which allow us to study the hadronisation mechanisms of charm quarks and to constrain different hadronisation models. Furthermore, the analysis of D mesons ($D^{0}$, $D^{+}$ and $D^{*+}$) as a function of charged-particle multiplicity and spherocity will be shown, investigating the role of multi-parton interactions.
This research is relevant per se and for the mission of the ALICE experiment at the LHC, which is devoted to the study of Quark-Gluon Plasma.
Abstract:
The convection-dispersion model and its extended form have been used to describe solute disposition in organs and to predict hepatic availabilities. A range of empirical transit-time density functions has also been used for a similar purpose. The use of the dispersion model with mixed boundary conditions and transit-time density functions has been queried recently by Hisaka and Sugiyama in this journal. We suggest that, consistent with the soil science and chemical engineering literature, the mixed boundary conditions are appropriate provided concentrations are defined in terms of flux to ensure continuity at the boundaries and mass balance. It is suggested that the use of the inverse Gaussian or other functions as empirical transit-time densities is independent of any boundary condition consideration. The mixed boundary condition solutions of the convection-dispersion model are the easiest to use when linear kinetics applies. In contrast, the closed conditions are easier to apply in a numerical analysis of nonlinear disposition of solutes in organs. We therefore argue that the choice of hepatic elimination models should be based on pragmatic considerations, giving emphasis to using the simplest or easiest solution that will give a sufficiently accurate prediction of hepatic pharmacokinetics for a particular application. (C) 2000 Wiley-Liss Inc. and the American Pharmaceutical Association. J Pharm Sci 89:1579-1586, 2000.
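As an illustration of the empirical transit-time densities mentioned above, the inverse Gaussian density can be written down directly; a quick numerical check confirms it is a proper density with mean equal to the mean transit time. (The parameter names `mu` for mean transit time and `lam` for shape, and the specific values tested, are illustrative assumptions, not taken from the article.)

```python
import math

def inverse_gaussian_pdf(t: float, mu: float, lam: float) -> float:
    """Inverse Gaussian (Wald) transit-time density:
    f(t) = sqrt(lam / (2 pi t^3)) * exp(-lam (t - mu)^2 / (2 mu^2 t)),
    with mean mu and shape parameter lam; zero for t <= 0.
    """
    if t <= 0.0:
        return 0.0
    return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * math.exp(
        -lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t)
    )

def trapz(f, a, b, n):
    """Composite trapezoidal rule, used here only as a sanity check."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h
```

Numerically integrating the density and its first moment over a generous time window recovers 1 and `mu` respectively, which is the basic consistency requirement for any transit-time density used in organ disposition models.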
Abstract:
The influence of hole-hole (h-h) propagation, in addition to the conventional particle-particle (p-p) propagation, on the energy per particle and the momentum distribution is investigated for the v2 central interaction which is derived from Reid's soft-core potential. The results are compared to Brueckner-Hartree-Fock calculations with a continuous choice for the single-particle (SP) spectrum. Calculation of the energy from a self-consistently determined SP spectrum leads to a lower saturation density. This result is not corroborated by calculating the energy from the hole spectral function, which is, however, not self-consistent. A generalization of previous calculations of the momentum distribution, based on a Goldstone diagram expansion, is introduced that allows the inclusion of h-h contributions to all orders. From this result an alternative calculation of the kinetic energy is obtained. In addition, a direct calculation of the potential energy is presented which is obtained from a solution of the ladder equation containing p-p and h-h propagation to all orders. These results can be considered as the contributions of selected Goldstone diagrams (including p-p and h-h terms on the same footing) to the kinetic and potential energy in which the SP energy is given by the quasiparticle energy. The results for the summation of Goldstone diagrams lead to a different momentum distribution than the one obtained from integrating the hole spectral function, which in general gives less depletion of the Fermi sea. Various arguments, based partly on the results that are obtained, are put forward that a self-consistent determination of the spectral functions including the p-p and h-h ladder contributions (using a realistic interaction) will shed light on the question of nuclear saturation at a nonrelativistic level that is consistent with the observed depletion of SP orbitals in finite nuclei.
Abstract:
Electricity distribution network operation (NO) models are challenged as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers are to adapt to competitive techno-economical business models regarding the operation of increasingly intelligent distribution networks. Factors driving the changes for new business models within network operation include: increased investments in distributed automation (DA), regulative frameworks for annual profit limits and quality through outage cost, increasing end-customer demands, climatic changes and increasing use of data system tools, such as Distribution Management System (DMS). The doctoral thesis addresses the questions a) whether there exist conditions and qualifications for competitive markets within electricity distribution network operation and b) if so, identification of limitations and required business mechanisms. This doctoral thesis aims to provide an analytical business framework, primarily for electric utilities, for evaluation and development purposes of dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model has been addressed through the use of the strategic business hierarchy levels of mission, vision and strategy for definition of the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics.
The main scientific contributions include further development of the extended transaction cost economics (TCE) for government decisions within electricity networks and validation of the usability of the methodology for the electricity distribution industry. Moreover, DMS benefit evaluations in the thesis based on the outage cost calculations propose theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case-study electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results in the thesis are generally validated by surveys and questionnaires.
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event; and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events, or when distinct individuals have dependent event times. In most studies there are covariates, such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2, we introduced a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations to study the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of parameter vectors. We then introduced a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered superior to the hazard function. Motivated by this, in Chapter 4, we considered a new semi-parametric model, the bivariate proportional mean residual life time model, to assess the relationship between mean residual life and covariates for gap times of recurrent events.
The counting process approach is used for the inference procedures of the gap time of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduced a proportional hazards model for duration times and developed inference procedures under dependent (informative) censoring. In Chapter 6, we introduced a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters were studied. The proposed models were applied to various real-life situations.
Abstract:
The term reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, or without failure and within specified performance limits, at a given age for a desired mission time when put to use under the designated application and operating environmental stress. A broad classification of the approaches employed in relation to reliability studies can be made as probabilistic and deterministic, where the main interest in the former is to devise tools and methods to identify the random mechanism governing the failure process through a proper statistical framework, while the latter addresses the question of finding the causes of failure and steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for no other reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on showing the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems only, but ranges over a wide variety of scientific investigations where the word lifetime may not refer to the length of life in the literal sense, but can be conceived in its most general form as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, biomedical sciences, economics, extreme value theory, etc.
Abstract:
Student’s t-distribution has found various applications in mathematical statistics. One of the main properties of the t-distribution is that it converges to the normal distribution as the number of samples tends to infinity. In this paper, by using a Cauchy integral we introduce a generalization of the t-distribution function with four free parameters and show that it again converges to the normal distribution. We provide a comprehensive treatment of the mathematical properties of this new distribution. Moreover, since the Fisher F-distribution has a close relationship with the t-distribution, we also introduce a generalization of the F-distribution and prove that it converges to the chi-square distribution as the number of samples tends to infinity. Finally, some particular sub-cases of these distributions are considered.
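The convergence of the t-distribution to the normal, the property the generalization above preserves, is easy to check numerically for the classical one-parameter t density; the sketch below uses log-gamma so the normalising constant stays stable at large degrees of freedom:

```python
import math

def t_pdf(x: float, nu: float) -> float:
    """Student's t density with nu degrees of freedom, via log-gamma
    to avoid overflow of gamma() at large nu."""
    log_c = (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
             - 0.5 * math.log(nu * math.pi))
    return math.exp(log_c - (nu + 1.0) / 2.0 * math.log1p(x * x / nu))

def normal_pdf(x: float) -> float:
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def max_gap(nu: float) -> float:
    """Largest pointwise gap between the t(nu) and N(0, 1) densities
    on a grid over [-5, 5]; shrinks as nu grows."""
    grid = [i / 10.0 for i in range(-50, 51)]
    return max(abs(t_pdf(x, nu) - normal_pdf(x)) for x in grid)
```

The gap decreases monotonically in the degrees of freedom, mirroring the limit statement in the abstract; the analogous check for the F to chi-square limit would follow the same pattern.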
Abstract:
The study of the association between two random variables that have a joint normal distribution is of interest in applied statistics; for example, in statistical genetics. This article, targeted to applied statisticians, addresses inferences about the coefficient of correlation (ρ) in the bivariate normal and standard bivariate normal distributions using likelihood, frequentist, and Bayesian perspectives. Some results are surprising. For instance, the maximum likelihood estimator and the posterior distribution of ρ in the standard bivariate normal distribution do not follow directly from results for a general bivariate normal distribution. An example employing bootstrap and rejection sampling procedures is used to illustrate some of the peculiarities.
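A minimal sketch of the bootstrap idea mentioned above, applied to the correlation coefficient of a simulated bivariate normal sample (the sample size, seed, and percentile-interval variant are illustrative choices, not the article's exact procedure):

```python
import math
import random

def pearson_r(xs, ys):
    """Sample correlation coefficient of paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def bootstrap_ci(xs, ys, n_boot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for the correlation coefficient:
    resample pairs with replacement, recompute r, take quantiles."""
    rng = random.Random(seed)
    n = len(xs)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(pearson_r([xs[i] for i in idx],
                               [ys[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

A bivariate normal pair with correlation ρ can be simulated as x = z1, y = ρ·z1 + sqrt(1 - ρ²)·z2 with independent standard normals z1, z2, after which the interval above quantifies the sampling uncertainty in the estimate of ρ.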