979 results for Density Function


Relevance: 60.00%

Abstract:

Carbon monoxide, the chief killer in fires, and other species are modelled for a series of enclosure fires. The conditions emulate building fires where CO is formed in the rich, turbulent, nonpremixed flame and is transported frozen to lean mixtures by the ceiling jet which is cooled by radiation and dilution. Conditional moment closure modelling is used and computational domain minimisation criteria are developed which reduce the computational cost of this method. The predictions give good agreement for CO and other species in the lean, quenched-gas stream, holding promise that this method may provide a practical means of modelling real, three-dimensional fire situations. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
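For orientation, the governing equation that conditional moment closure solves, in its standard first-order, singly conditioned, high-Reynolds-number form, is shown below; the notation is generic and may differ from that used in the paper.

```latex
\frac{\partial Q_\alpha}{\partial t}
  + \langle u_j \mid \eta \rangle \frac{\partial Q_\alpha}{\partial x_j}
  = \langle N \mid \eta \rangle \frac{\partial^2 Q_\alpha}{\partial \eta^2}
  + \langle \dot{\omega}_\alpha \mid \eta \rangle ,
\qquad
Q_\alpha(\eta, \mathbf{x}, t) \equiv \langle Y_\alpha \mid \xi = \eta \rangle
```

Here ξ is the mixture fraction, η its sample-space variable, N the scalar dissipation rate, and the first-order closure evaluates the conditional chemical source term at the conditional mean composition and temperature.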

Relevance: 60.00%

Abstract:

Quantum mechanics has been formulated in phase space, with the Wigner function as the representative of the quantum density operator, and classical mechanics has been formulated in Hilbert space, with the Groenewold operator as the representative of the classical Liouville density function. Semiclassical approximations to the quantum evolution of the Wigner function have been defined, enabling the quantum evolution to be approached from a classical starting point. Now analogous semiquantum approximations to the classical evolution of the Groenewold operator are defined, enabling the classical evolution to be approached from a quantum starting point. Simple nonlinear systems with one degree of freedom are considered, whose Hamiltonians are polynomials in the Hamiltonian of the simple harmonic oscillator. The behavior of expectation values of simple observables and of eigenvalues of the Groenewold operator is calculated numerically and compared for the various semiclassical and semiquantum approximations.
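For reference, the textbook Weyl-Wigner transform that maps the quantum density operator to the phase-space Wigner function (one degree of freedom) is:

```latex
W(q, p) = \frac{1}{\pi \hbar} \int_{-\infty}^{\infty}
  \langle q + y \,|\, \hat{\rho} \,|\, q - y \rangle \, e^{-2 i p y / \hbar} \, dy
```

The Groenewold operator discussed above plays the mirror role: it is the Hilbert-space operator assigned, via the inverse correspondence, to the classical Liouville density on phase space.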

Relevance: 60.00%

Abstract:

The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
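As a concrete illustration of the comparison described above, the sketch below fits a probit model by maximum likelihood and then draws from a posterior in which a flat prior is truncated so that one coefficient's sign is restricted. The data, prior, and sampler settings are stand-ins, not the mortgage-choice application of the paper.

```python
# Illustrative sketch: probit choice model estimated by maximum likelihood and by a
# simple random-walk Metropolis sampler under a sign-restricted (truncated flat) prior.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([-0.2, 0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ml = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("ML estimates:", ml.x)

def log_post(beta):
    if beta[1] <= 0:            # inequality restriction on the slope's sign
        return -np.inf
    return -neg_loglik(beta)    # flat prior elsewhere

beta, draws = ml.x.copy(), []
lp = log_post(beta)
for _ in range(20000):
    prop = beta + 0.1 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    draws.append(beta.copy())
draws = np.array(draws[5000:])
print("Posterior mean under sign restriction:", draws.mean(axis=0))
```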

Relevance: 60.00%

Abstract:

This work studied the structure-hepatic disposition relationships for cationic drugs of varying lipophilicity using a single-pass, in situ rat liver preparation. The lipophilicity of the cationic drugs studied in this work follows the order diltiazem > propranolol > labetalol > prazosin > antipyrine > atenolol. Parameters characterizing the hepatic distribution and elimination kinetics of the drugs were estimated using the multiple indicator dilution method. The kinetic model used to describe drug transport (the two-phase stochastic model) integrated cytoplasmic binding kinetics and belongs to the class of barrier-limited and space-distributed liver models. The hepatic extraction ratio (E) (0.30-0.92) increased with lipophilicity. The intracellular binding rate constant (k_on) and the equilibrium amount ratios characterizing the slowly and rapidly equilibrating binding sites (K_S and K_R) increase with the lipophilicity of the drug (k_on: 0.05-0.35 s^-1; K_S: 0.61-16.67; K_R: 0.36-0.95), whereas the intracellular unbinding rate constant (k_off) decreases with the lipophilicity of the drug (0.081-0.021 s^-1). The partition ratio of the influx (k_in) and efflux (k_out) rate constants, k_in/k_out, increases with increasing pK_a of the drug [from 1.72 for antipyrine (pK_a = 1.45) to 9.76 for propranolol (pK_a = 9.45)], the differences in k_in/k_out for the different drugs arising mainly from ion trapping in the mitochondria and lysosomes. The intrinsic elimination clearance (CL_int), permeation clearance (CL_pT), and permeability-surface area product (PS) all increase with the lipophilicity of the drug [CL_int (ml·min^-1·g^-1 of liver): 10.08-67.41; CL_pT (ml·min^-1·g^-1 of liver): 10.80-5.35; PS (ml·min^-1·g^-1 of liver): 14.59-90.54]. It is concluded that cationic drug kinetics in the liver can be modeled using models that integrate the presence of cytoplasmic binding, a hepatocyte barrier, and a vascular transit density function.
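The ion-trapping mechanism mentioned above can be illustrated with a generic Henderson-Hasselbalch calculation for a monoprotic weak base. The pH values and the assumption that only the unionized species crosses the membrane freely are illustrative; this is not the paper's two-phase stochastic model.

```python
# Illustrative ion-trapping calculation for a monoprotic weak base: the ratio of
# total (ionized + unionized) drug inside an acidic compartment to that in the
# cytosol at equilibrium, assuming only the unionized form permeates freely.
def trapping_ratio(pka, ph_inside, ph_outside=7.2):
    return (1 + 10 ** (pka - ph_inside)) / (1 + 10 ** (pka - ph_outside))

for name, pka in [("antipyrine", 1.45), ("propranolol", 9.45)]:
    print(name, "lysosome (assumed pH 5.0):", round(trapping_ratio(pka, 5.0), 1))
```

The more basic drug is trapped in the acidic compartment by orders of magnitude, whereas the weak base with a very low pK_a is essentially not trapped, which is the qualitative behaviour invoked to explain the k_in/k_out differences.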

Relevance: 60.00%

Abstract:

In this paper we study the n-fold multiplicative model involving Weibull distributions and examine some properties of the model. These include the shapes of the density and failure rate functions and the WPP plot, which allow one to decide whether a given data set can be adequately described by the model. We also discuss the estimation of model parameters based on the WPP plot. (C) 2001 Elsevier Science Ltd. All rights reserved.
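A minimal numerical sketch of the n-fold multiplicative Weibull model and of the WPP transform used to analyse it is given below; the parameter values are arbitrary and only serve to illustrate the construction.

```python
# Minimal sketch of an n-fold multiplicative Weibull model, F(t) = prod_i F_i(t),
# with its density (numerically) and the Weibull probability paper (WPP) coordinates
# y = ln(-ln(1 - F(t))) versus x = ln(t). Parameter values are arbitrary.
import numpy as np

shapes, scales = [0.8, 2.5], [1.0, 3.0]      # two illustrative Weibull subpopulations

def F(t):
    t = np.asarray(t, dtype=float)
    out = np.ones_like(t)
    for b, a in zip(shapes, scales):
        out *= 1.0 - np.exp(-(t / a) ** b)
    return out

def f(t, h=1e-6):
    return (F(t + h) - F(t - h)) / (2 * h)   # numerical density, adequate for plotting

t = np.linspace(0.05, 20, 400)
x = np.log(t)
y = np.log(-np.log(1.0 - np.clip(F(t), 1e-12, 1 - 1e-12)))
# On WPP coordinates a single Weibull is a straight line; the multiplicative model
# traces a curve whose shape is what WPP-plot-based identification exploits.
```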

Relevance: 60.00%

Abstract:

A new modeling approach-multiple mapping conditioning (MMC)-is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.

Relevance: 60.00%

Abstract:

This paper deals with an n-fold Weibull competing risk model. A characterisation of the WPP plot is given along with estimation of model parameters when modelling a given data set. These are illustrated through two examples. A study of the different possible shapes for the density and failure rate functions is also presented. (C) 2003 Elsevier Ltd. All rights reserved.
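For reference, the generic form of an n-fold Weibull competing risk model (a series structure, so the survivor functions multiply and the failure rates add) is shown below; the symbols are generic and not the paper's notation.

```latex
F(t) = 1 - \prod_{i=1}^{n} \left[ 1 - F_i(t) \right],
\qquad
F_i(t) = 1 - \exp\!\left[ -\left( t / \alpha_i \right)^{\beta_i} \right],
\qquad
h(t) = \sum_{i=1}^{n} h_i(t)
```

On WPP coordinates, y = ln(-ln(1 - F(t))) against x = ln t, each subpopulation is a straight line, and the shape of the combined curve relative to these lines is what the characterisation and parameter estimation exploit.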

Relevance: 60.00%

Abstract:

Over the last decades, equipment availability has become an increasingly important issue, as has its dependence on component characteristics such as reliability and maintainability. This is of particular importance for high-risk industrial equipment, where these factors play a fundamental role in risk management when safety or large economic values are at stake. Since availability is a function of reliability, maintainability, and maintenance support activities, the main goal is to improve one or more of these factors. This paper shows how maintainability can influence availability and presents a methodology to select the most important attributes for maintainability using partial Multi-Criteria Decision Making (pMCDM). Improvements in maintainability can be analyzed by treating it as a probability described by a restore probability density function g(t).
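As a small numerical illustration of the availability-maintainability link described above, the sketch below computes steady-state availability from an assumed time-to-failure density and an assumed restore-time density g(t); the distributions and their parameters are illustrative, not taken from the paper.

```python
# Minimal sketch: steady-state availability A = MTTF / (MTTF + MTTR), with the mean
# times obtained by integrating t against a failure-time density f(t) and a
# restore-time density g(t). Both densities and their parameters are assumptions.
import numpy as np
from scipy import integrate, stats

f = stats.weibull_min(c=1.8, scale=4000.0)   # assumed time-to-failure density (hours)
g = stats.lognorm(s=0.6, scale=8.0)          # assumed restore-time density g(t) (hours)

mttf = integrate.quad(lambda t: t * f.pdf(t), 0, f.ppf(1 - 1e-9))[0]
mttr = integrate.quad(lambda t: t * g.pdf(t), 0, g.ppf(1 - 1e-9))[0]
availability = mttf / (mttf + mttr)
print(f"MTTF = {mttf:.0f} h, MTTR = {mttr:.1f} h, availability = {availability:.4f}")
```

Improving maintainability (shifting g(t) toward shorter restore times) lowers MTTR and raises availability without touching reliability, which is the trade-off the attribute-selection methodology is meant to inform.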

Relevance: 60.00%

Abstract:

Wind resource evaluation in two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted in the selected sites, allowing for a comparison between in situ measurements and simulated wind, in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of the benefits and limitations of these methodologies for wind resource estimation. In the first methodology the mesoscale model acts as a set of “virtual” wind measuring stations: wind data computed by WRF for both sites was inserted directly as input in WAsP. In the second approach, the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data were extracted at the top of the planetary boundary layer for both sites, to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained with these methodologies were compared with those resulting from in situ measurements, in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine in each site. Results showed that the second approach produces values closest to the measured ones, and fairly acceptable deviations were found with this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments, although the coupling of mesoscale and microscale models in areas with complex topography should be done with extreme caution.
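For orientation, the sketch below shows the kind of post-processing referred to above: fitting a Weibull probability density function to a wind-speed series and integrating an assumed power curve against it to estimate annual production. The data and the turbine power curve are synthetic stand-ins, not WRF or WAsP output.

```python
# Illustrative sketch: fit a Weibull pdf to (synthetic) wind speeds and estimate
# annual energy production by integrating an assumed generic power curve against it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wind = stats.weibull_min.rvs(c=2.0, scale=7.5, size=52560, random_state=rng)  # stand-in series

shape, _, scale = stats.weibull_min.fit(wind, floc=0.0)   # Weibull k (shape) and A (scale)

def power_kw(v):
    """Very rough generic 2 MW power curve: cubic ramp between cut-in and rated speed."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= 3) & (v < 12), 2000 * ((v - 3) / 9) ** 3, 0.0)
    return np.where((v >= 12) & (v < 25), 2000.0, p)

v = np.linspace(0, 30, 3001)
pdf = stats.weibull_min.pdf(v, shape, loc=0, scale=scale)
mean_power_kw = np.sum(power_kw(v) * pdf) * (v[1] - v[0])
aep_mwh = mean_power_kw * 8760 / 1000
print(f"k = {shape:.2f}, A = {scale:.2f} m/s, AEP ~ {aep_mwh:.0f} MWh")
```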

Relevance: 60.00%

Abstract:

Dissertation submitted for the degree of Master in Civil Engineering – Structures profile

Relevance: 60.00%

Abstract:

Abstract Background: Morbid obesity is directly related to deterioration in cardiorespiratory capacity, including changes in cardiovascular autonomic modulation. Objective: This study aimed to assess cardiovascular autonomic function in morbidly obese individuals. Methods: Cross-sectional study including two groups of participants: Group I, composed of 50 morbidly obese subjects, and Group II, composed of 30 nonobese subjects. Autonomic function was assessed by heart rate variability in the time domain (standard deviation of all normal R-R intervals [SDNN]; square root of the mean squared differences of successive R-R intervals [RMSSD]; and percentage of successive R-R interval differences greater than 50 milliseconds [pNN50]) and in the frequency domain (high frequency [HF] and low frequency [LF]: integration of the power spectral density function over the high- and low-frequency ranges, respectively). Between-group comparisons were performed with Student's t-test, at a 5% level of significance. Results: Obese subjects had lower values of SDNN (40.0 ± 18.0 ms vs. 70.0 ± 27.8 ms; p = 0.0004), RMSSD (23.7 ± 13.0 ms vs. 40.3 ± 22.4 ms; p = 0.0030), pNN50 (14.8 ± 10.4% vs. 25.9 ± 7.2%; p = 0.0061) and HF (30.0 ± 17.5 Hz vs. 51.7 ± 25.5 Hz; p = 0.0023) than controls. The mean LF/HF ratio was higher in Group I (5.0 ± 2.8 vs. 1.0 ± 0.9; p = 0.0189), indicating changes in sympathovagal balance. No statistical difference in LF was observed between Group I and Group II (50.1 ± 30.2 Hz vs. 40.9 ± 23.9 Hz; p = 0.9013). Conclusion: Morbidly obese individuals have increased sympathetic activity and reduced parasympathetic activity, characterizing cardiovascular autonomic dysfunction.
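The time-domain indices named above are straightforward to compute from a sequence of normal R-R intervals; the sketch below shows the standard definitions applied to a short synthetic series.

```python
# Minimal sketch of the time-domain HRV indices named above, computed from normal
# R-R intervals in milliseconds (synthetic values, purely illustrative).
import numpy as np

rr = np.array([812, 790, 805, 830, 795, 788, 842, 810, 799, 825], dtype=float)  # ms

sdnn = rr.std(ddof=1)                          # SDNN: std of all normal R-R intervals
diff = np.diff(rr)
rmssd = np.sqrt(np.mean(diff ** 2))            # RMSSD: root mean square of successive differences
pnn50 = 100.0 * np.mean(np.abs(diff) > 50)     # pNN50: % of successive differences > 50 ms
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %")
```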

Relevance: 60.00%

Abstract:

The metropolitan spatial structure displays various patterns, sometimes monocentric and sometimes multicentric, which seem far more complicated than the exponential density function used in classic works such as Clark (1961), Muth (1969) or Mills (1973), among others, can effectively represent. A more flexible density function, such as the cubic spline function (Anderson (1982), Zheng (1991), etc.), seems to be needed to describe the density-accessibility relationship. Also, accessibility, the fundamental determinant of density variations, is only partly captured by the inclusion of distance to the city centre as an explanatory variable. Steen (1986) proposed to correct that misspecification by including an additional gradient for distance to the nearest transportation axis. In identifying the determinants of urban spatial structure in the context of inter-urban systems, some of the variables proposed by Muth (1969), Mills (1973) and Alperovich (1983), such as city age or population, make no sense in the case of a single urban system. All three criticisms of the exponential density function and its determinants apply to the Barcelona Metropolitan Region, a polycentric conurbation structured along well-defined transportation axes.
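The two-gradient specification referred to above (distance to the centre plus distance to the nearest transportation axis) can be estimated by ordinary least squares on the logarithm of density; the sketch below uses synthetic data and hypothetical variable names purely to illustrate the functional form.

```python
# Illustrative log-linear fit of a negative exponential density function with an extra
# gradient for distance to the nearest transportation axis (in the spirit of Steen, 1986).
# Data are synthetic; variable names are assumptions, not the paper's dataset.
import numpy as np

rng = np.random.default_rng(2)
n = 300
d_cbd = rng.uniform(0.5, 40, n)      # distance to the CBD (km)
d_axis = rng.uniform(0.0, 10, n)     # distance to the nearest transportation axis (km)
log_density = 9.0 - 0.08 * d_cbd - 0.15 * d_axis + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), d_cbd, d_axis])
coef, *_ = np.linalg.lstsq(X, log_density, rcond=None)
print("ln D0 = %.2f, CBD gradient = %.3f, axis gradient = %.3f" % tuple(coef))
```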

Relevance: 60.00%

Abstract:

Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
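The reduced-form ingredient mentioned above, a vector error correction model, can be estimated with statsmodels as sketched below; this is only an illustration on synthetic series and is not the paper's Bayesian estimation of the structural growth model with measurement errors.

```python
# Minimal sketch of estimating a VECM on (synthetic) log output, consumption and
# investment series with statsmodels; only the reduced-form step, for illustration.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
T = 200
trend = 0.005 * np.arange(T)
common = np.cumsum(rng.normal(0, 0.01, T))           # shared stochastic trend
data = np.column_stack([
    trend + common + rng.normal(0, 0.005, T),         # log output
    trend + common - 0.2 + rng.normal(0, 0.005, T),   # log consumption
    trend + common - 1.0 + rng.normal(0, 0.01, T),    # log investment
])

res = VECM(data, k_ar_diff=1, coint_rank=2, deterministic="co").fit()
print(res.alpha)   # adjustment coefficients
print(res.beta)    # cointegrating vectors
```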

Relevance: 60.00%

Abstract:

To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was directly derived from the microscopic dynamics of individual neurons, which are modeled by conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of probability distribution functions, and thus, the evolution of all possible macroscopic quantities that are given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario (asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states) can be uncovered by increasing the strength of the excitatory coupling. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment closure assumption used in [13] to further simplify the kinetic theory.
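A deterministic solver of the kind described above can be sketched for a generic one-dimensional Fokker-Planck equation with a simple explicit finite-difference scheme; the drift, diffusion coefficient, and boundary treatment below are assumptions made for illustration, not the conductance-based model or the scheme of the paper.

```python
# Finite-difference sketch for a generic 1-D Fokker-Planck equation
# d(rho)/dt = -d(a(v) rho)/dv + D d^2(rho)/dv^2, with zero-flux (reflecting) boundaries.
import numpy as np

nv, v_min, v_max = 200, -1.0, 1.0
v = np.linspace(v_min, v_max, nv)
dv = v[1] - v[0]
D = 0.02
a = -v                        # assumed linear drift toward v = 0
dt = 0.2 * dv ** 2 / D        # explicit stability margin

rho = np.exp(-((v - 0.5) ** 2) / 0.01)
rho /= rho.sum() * dv         # normalize the initial probability density

for _ in range(2000):
    flux = a * rho - D * np.gradient(rho, dv)   # advective minus diffusive flux
    flux[0] = flux[-1] = 0.0                    # reflecting (zero-flux) boundaries
    rho = rho - dt * np.gradient(flux, dv)
    rho = np.clip(rho, 0.0, None)
    rho /= rho.sum() * dv                       # re-normalize the probability mass

mean_v = (v * rho).sum() * dv                   # first moment of the density
print(f"mean of the density after relaxation: {mean_v:.3f}")
```

Macroscopic quantities are then read off as moments of the evolving density, which is the same post-processing step the abstract refers to when validating the moment closure assumption.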

Relevance: 60.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
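The kernel-density step described above can be sketched as follows: estimate the joint density of collocated log hydraulic and log electrical conductivities with a Gaussian kernel density estimator, then evaluate the conditional density of log K at a new electrical conductivity value. The data and the assumed K-sigma relation below are synthetic stand-ins for the collocated well logs.

```python
# Sketch: non-parametric joint density of collocated log K and log sigma via Gaussian
# KDE, then the conditional density of log K given an observed log sigma.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
log_sigma = rng.normal(-2.0, 0.4, 150)                       # collocated electrical logs
log_k = 1.5 * log_sigma + rng.normal(0.0, 0.3, 150) - 6.0    # assumed K-sigma relation

kde = gaussian_kde(np.vstack([log_sigma, log_k]))            # non-parametric joint pdf

def conditional_pdf_logk(sigma_obs, grid):
    """p(log K | log sigma = sigma_obs), evaluated on 'grid' and renormalized."""
    pts = np.vstack([np.full_like(grid, sigma_obs), grid])
    dens = kde(pts)
    return dens / (dens.sum() * (grid[1] - grid[0]))

grid = np.linspace(-12, -5, 300)
pdf = conditional_pdf_logk(-1.8, grid)
print("conditional mean of log K:", (grid * pdf).sum() * (grid[1] - grid[0]))
```

In a sequential simulation, draws from such conditional densities would be made at each non-sampled location in turn, conditioned on the geophysical data and previously simulated values.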