944 results for Almost Optimal Density Function
Abstract:
Wind resource evaluation in two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted in the selected sites, allowing for a comparison between in situ measurements and simulated wind, in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of the benefits and limitations of these methodologies for wind resource estimation. In the first methodology the mesoscale model acts as a set of “virtual” wind measuring stations: wind data computed by WRF for both sites were inserted directly as input into WAsP. In the second approach, the same procedure was followed, but the terrain effects induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data were extracted at the top of the planetary boundary layer for both sites, aiming to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained with the abovementioned methodologies were compared with those resulting from in situ measurements, in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine at each site. Results showed that the second tested approach is the one that produces values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data.
Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data mainly for preliminary wind resource assessments, although the application of mesoscale and microscale coupling in areas with complex topography should be done with extreme caution.
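The Weibull-based yield estimate mentioned above combines the fitted density parameters with a turbine power curve. A minimal sketch of that computation (with a hypothetical power curve, not the turbine used in the study) might look like:

```python
import math

def weibull_pdf(v, k, A):
    """Weibull probability density for wind speed v (m/s), shape k, scale A."""
    return (k / A) * (v / A) ** (k - 1) * math.exp(-((v / A) ** k))

def mean_wind_speed(k, A):
    """Mean of the Weibull distribution: A * Gamma(1 + 1/k)."""
    return A * math.gamma(1.0 + 1.0 / k)

def annual_energy_kwh(k, A, power_curve, dv=0.1, v_max=30.0):
    """Annual energy yield: power curve (kW) weighted by the Weibull
    density and integrated over wind speed, times 8760 hours per year."""
    hours = 8760.0
    energy = 0.0
    v = dv / 2.0  # midpoint rule
    while v < v_max:
        energy += power_curve(v) * weibull_pdf(v, k, A) * dv
        v += dv
    return energy * hours

# Hypothetical 2 MW turbine: cubic ramp between cut-in (3 m/s) and rated
# (12 m/s), constant until cut-out (25 m/s). Illustrative only.
def power_curve(v):
    if v < 3.0 or v > 25.0:
        return 0.0
    if v >= 12.0:
        return 2000.0
    return 2000.0 * ((v - 3.0) / 9.0) ** 3
```

For example, `annual_energy_kwh(2.0, 8.0, power_curve)` gives the estimated annual production for a site with Weibull shape 2.0 and scale 8 m/s.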
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering – Structures Profile
Abstract:
Abstract Background: Morbid obesity is directly related to deterioration in cardiorespiratory capacity, including changes in cardiovascular autonomic modulation. Objective: This study aimed to assess the cardiovascular autonomic function in morbidly obese individuals. Methods: Cross-sectional study including two groups of participants: Group I, composed of 50 morbidly obese subjects, and Group II, composed of 30 nonobese subjects. The autonomic function was assessed by heart rate variability in the time domain (standard deviation of all normal R-R intervals [SDNN]; square root of the mean squared differences of successive R-R intervals [RMSSD]; and percentage of successive R-R interval differences greater than 50 milliseconds [pNN50]), and in the frequency domain (high frequency [HF] and low frequency [LF]: integration of the power spectral density function over the high- and low-frequency ranges, respectively). Between-group comparisons were performed by Student's t-test, with a significance level of 5%. Results: Obese subjects had lower values of SDNN (40.0 ± 18.0 ms vs. 70.0 ± 27.8 ms; p = 0.0004), RMSSD (23.7 ± 13.0 ms vs. 40.3 ± 22.4 ms; p = 0.0030), pNN50 (14.8 ± 10.4% vs. 25.9 ± 7.2%; p = 0.0061) and HF (30.0 ± 17.5 Hz vs. 51.7 ± 25.5 Hz; p = 0.0023) than controls. The mean LF/HF ratio was higher in Group I (5.0 ± 2.8 vs. 1.0 ± 0.9; p = 0.0189), indicating changes in the sympathovagal balance. No statistical difference in LF was observed between Group I and Group II (50.1 ± 30.2 Hz vs. 40.9 ± 23.9 Hz; p = 0.9013). Conclusion: Morbidly obese individuals have increased sympathetic activity and reduced parasympathetic activity, featuring cardiovascular autonomic dysfunction.
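The time-domain indices reported above are simple statistics of the R-R interval series. A minimal sketch of their computation (illustrative only, not the clinical software used in the study):

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain heart rate variability indices from a list of
    normal R-R intervals given in milliseconds."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    # SDNN: standard deviation of all normal R-R intervals
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    # RMSSD: root mean square of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: percentage of successive differences larger than 50 ms
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50.0) / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Toy six-beat series for illustration
metrics = hrv_time_domain([800.0, 810.0, 790.0, 850.0, 795.0, 805.0])
```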
Abstract:
The metropolitan spatial structure displays various patterns, sometimes monocentric and sometimes multicentric, which seem far more complicated than the exponential density function used in classic works such as Clark (1961), Muth (1969) or Mills (1973), among others, can effectively represent. A more flexible density function, such as the cubic spline function (Anderson (1982), Zheng (1991), etc.), seems to be needed to describe the density-accessibility relationship. Also, accessibility, the fundamental determinant of density variations, is only partly captured by the inclusion of distance to the city centre as an explanatory variable. Steen (1986) proposed to correct that misspecification by including an additional gradient for distance to the nearest transportation axis. In identifying the determinants of urban spatial structure in the context of inter-urban systems, some of the variables proposed by Muth (1969), Mills (1973) and Alperovich (1983), such as city age or population, make no sense in the case of a single urban system. All three criticisms of the exponential density function and its determinants apply to the Barcelona Metropolitan Region, a polycentric conurbation structured on well-defined transportation axes.
Abstract:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
Abstract:
To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was directly derived from the microscopic dynamics of individual neurons, which are modeled by conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of probability distribution functions, and thus the evolution of all possible macroscopic quantities that are given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario of asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states can be uncovered by increasing the strength of the excitatory coupling. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment closure assumption used in [13] to further simplify the kinetic theory.
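The full nonlinear solver is beyond the scope of an abstract, but its basic building block, an explicit finite-difference step for a one-dimensional drift-diffusion (Fokker-Planck) equation, can be sketched as follows. This is an illustrative scheme with a made-up Ornstein-Uhlenbeck drift, not the conductance-based model of the paper:

```python
import math

def fp_step(p, v, dt, dv, D, mu):
    """One explicit finite-difference step of the 1-D Fokker-Planck
    equation  dp/dt = -d/dv( mu(v) p ) + D d2p/dv2  on a uniform grid;
    endpoints are copied from their neighbours as a crude zero-gradient
    boundary condition."""
    n = len(p)
    new = list(p)
    for i in range(1, n - 1):
        drift = (mu(v[i + 1]) * p[i + 1] - mu(v[i - 1]) * p[i - 1]) / (2.0 * dv)
        diffusion = D * (p[i + 1] - 2.0 * p[i] + p[i - 1]) / (dv * dv)
        new[i] = p[i] + dt * (-drift + diffusion)
    new[0], new[-1] = new[1], new[-2]
    return new

# Test problem: Ornstein-Uhlenbeck drift mu(v) = -v pulls mass toward 0.
dv, D, dt = 0.1, 0.5, 0.004          # dt below the stability bound dv**2/(2*D)
v = [-4.0 + dv * i for i in range(81)]
p = [math.exp(-(x - 1.0) ** 2 / 0.5) for x in v]  # Gaussian centred at 1
mass0 = sum(p) * dv
p = [x / mass0 for x in p]           # normalize to unit mass
for _ in range(200):                 # evolve to t = 0.8
    p = fp_step(p, v, dt, dv, D, lambda x: -x)
```

After the evolution, the mean of the density has relaxed toward zero (roughly as exp(-t) for this drift) while total probability mass is approximately conserved.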
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
The geometry and connectivity of fractures exert a strong influence on the flow and transport properties of fracture networks. We present a novel approach to stochastically generate three-dimensional discrete networks of connected fractures that are conditioned to hydrological and geophysical data. A hierarchical rejection sampling algorithm is used to draw realizations from the posterior probability density function at different conditioning levels. The method is applied to a well-studied granitic formation using data acquired within two boreholes located 6 m apart. The prior models include 27 fractures with their geometry (position and orientation) bounded by information derived from single-hole ground-penetrating radar (GPR) data acquired during saline tracer tests and optical televiewer logs. Eleven cross-hole hydraulic connections between fractures in neighboring boreholes and the order in which the tracer arrives at different fractures are used for conditioning. Furthermore, the networks are conditioned to the observed relative hydraulic importance of the different hydraulic connections by numerically simulating the flow response. Among the conditioning data considered, constraints on the relative flow contributions were the most effective in determining the variability among the network realizations. Nevertheless, we find that the posterior model space is strongly determined by the imposed prior bounds. Strong prior bounds were derived from GPR measurements and helped to make the approach computationally feasible. We analyze a set of 230 posterior realizations that reproduce all data given their uncertainties assuming the same uniform transmissivity in all fractures. The posterior models provide valuable statistics on length scales and density of connected fractures, as well as their connectivity. 
In an additional analysis, effective transmissivity estimates of the posterior realizations indicate a strong influence of the DFN structure, in that it induces large variations of equivalent transmissivities between realizations. The transmissivity estimates agree well with previous estimates at the site based on pumping, flowmeter and temperature data.
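The hierarchical rejection sampling idea described above, drawing prior realizations and keeping only those that pass successive conditioning levels, can be sketched generically. The following toy one-dimensional prior and conditioning checks are made up for illustration and are not the fracture-network model itself:

```python
import random

def hierarchical_rejection_sampler(prior_draw, levels, n_samples, max_tries=100000):
    """Keep prior draws that pass every conditioning level; ordering the
    cheap checks first avoids expensive evaluations for most candidates."""
    accepted = []
    tries = 0
    while len(accepted) < n_samples and tries < max_tries:
        tries += 1
        candidate = prior_draw()
        if all(check(candidate) for check in levels):
            accepted.append(candidate)
    return accepted

# Toy illustration: a scalar uniform prior conditioned on two nested checks.
random.seed(0)
samples = hierarchical_rejection_sampler(
    lambda: random.uniform(0.0, 1.0),
    [lambda x: x > 0.5,              # level 1: cheap bound (e.g. geometry)
     lambda x: (x * 10.0) % 2 < 1],  # level 2: more restrictive check
    50)
```

In the paper's setting, the cheap levels would correspond to geometric constraints from GPR and televiewer data, and the expensive final level to the simulated flow response.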
Abstract:
Some considerations on the application of fuzzy set theory to quantum chemistry are briefly discussed. It is shown here that many chemical concepts associated with the theory are well suited to being connected with the structure of fuzzy sets. It is also explained how some theoretical descriptions of quantum observables are enhanced when treated with the tools associated with fuzzy sets. The density function is taken as an example of the use of possibility distributions alongside quantum probability distributions.
Abstract:
The simplex, the sample space of compositional data, can be structured as a real Euclidean space. This fact allows one to work with the coefficients with respect to an orthonormal basis. Over these coefficients we apply standard real analysis; in particular, we define two different laws of probability through the density function and we study their main properties.
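One common way to obtain orthonormal coordinates on the simplex is the isometric log-ratio (ilr) transform. A minimal sketch of one standard basis choice (not necessarily the basis used by the authors):

```python
import math

def ilr(parts):
    """Isometric log-ratio coordinates of a D-part composition with
    respect to one standard orthonormal basis of the simplex."""
    logs = [math.log(x) for x in parts]
    coords = []
    for i in range(1, len(parts)):
        # balance between the geometric mean of the first i parts and part i+1
        mean_first = sum(logs[:i]) / i
        coords.append(math.sqrt(i / (i + 1.0)) * (mean_first - logs[i]))
    return coords
```

A D-part composition maps to D-1 real coordinates, and the coordinates depend only on ratios of parts, so rescaling the composition leaves them unchanged.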
Abstract:
Starting from the usual definitions of Quantum Similarity Measures (QSM), the dependence of these measures on the molecular superposition is considered. For the particular case in which the systems compared are a molecule and an atom, and the measures are computed with the EASA approximation, the QSM become functions of the three spatial coordinates. By keeping one of the three coordinates fixed, the variation of the similarity value in a given plane can easily be represented, yielding the so-called similarity maps. In this article, the similarity maps obtained with different QSM for simple systems are compared.
Abstract:
This work presents the use of Fermi hole electron density functions to enhance the role of a specific molecular region, considered responsible for the molecular reactivity, while keeping the size of the original density function. These densities are used to compute quantum molecular self-similarity measures and are presented as an alternative to the use of isolated molecular fragments in structure-property relationship studies. The work is complemented with a practical example in which the molecular self-similarity computed from the modified densities is correlated with the energy of an isodesmic reaction.
Abstract:
The space subdivision in cells resulting from a process of random nucleation and growth is a subject of interest in many scientific fields. In this paper, we deduce the expected value and variance of these cell size distributions, assuming that the space subdivision process follows the premises of the Kolmogorov-Johnson-Mehl-Avrami model. We impose no restrictions on the time dependency of the nucleation and growth rates. We have also developed an approximate analytical cell size probability density function. Finally, we have applied our approach to the distributions resulting from solid phase crystallization under isochronal heating conditions.
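For the special case of time-independent nucleation and growth rates, the Kolmogorov-Johnson-Mehl-Avrami transformed fraction reduces to the classical Avrami expression. A minimal sketch:

```python
import math

def avrami_fraction(t, K, n):
    """KJMA transformed fraction X(t) = 1 - exp(-K * t**n) for
    time-independent nucleation and growth rates; n = 4 corresponds to
    3-D interface-limited growth with constant nucleation rate."""
    return 1.0 - math.exp(-K * t ** n)
```

The general case treated in the paper, with arbitrary time dependence of the rates, replaces K * t**n by an extended-volume integral over the nucleation history.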
Abstract:
We use aggregate GDP data and within-country income shares for the period 1970-1998 to assign a level of income to each person in the world. We then estimate the Gaussian kernel density function for the worldwide distribution of income. We compute world poverty rates by integrating the density function below the poverty lines. The $1/day poverty rate has fallen from 20% to 5% over the last twenty-five years. The $2/day rate has fallen from 44% to 18%. There are between 300 and 500 million fewer poor people in 1998 than there were in the 70s. We estimate global income inequality using seven different popular indexes: the Gini coefficient, the variance of log-income, two of Atkinson's indexes, the Mean Logarithmic Deviation, the Theil index and the coefficient of variation. All indexes show a reduction in global income inequality between 1980 and 1998. We also find that most global disparities can be accounted for by across-country, not within-country, inequalities. Within-country disparities have increased slightly during the sample period, but not nearly enough to offset the substantial reduction in across-country disparities. The across-country reductions in inequality are driven mainly, but not fully, by the large growth rate of the incomes of the 1.2 billion Chinese citizens. Unless Africa starts growing in the near future, we project that income inequalities will start rising again. If Africa does not start growing, then China, India, the OECD and the rest of the middle-income and rich countries will diverge away from it, and global inequality will rise. Thus, the aggregate GDP growth of the African continent should be the priority of anyone concerned with increasing global income inequality.
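The kernel density estimation and poverty-rate integration described above can be sketched as follows, on toy data rather than the country income distributions used in the paper:

```python
import math

def gaussian_kde(data, bandwidth):
    """Return a Gaussian kernel density estimate built on the data points."""
    n = len(data)
    c = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return c * sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data)
    return density

def headcount_rate(density, line, lo, dx=0.01):
    """Poverty headcount: midpoint-rule integral of the density below the line."""
    total, x = 0.0, lo
    while x < line:
        total += density(x + dx / 2.0) * dx
        x += dx
    return total

dens = gaussian_kde([1.0, 2.0, 3.0], 0.5)  # toy income sample
```

With a poverty line at the median of this symmetric toy sample, the headcount rate comes out close to one half, and the density integrates to approximately one over its support.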
Abstract:
A dynamical model based on a continuous addition of colored shot noises is presented. The resulting process is colored and non-Gaussian. A general expression for the characteristic function of the process is obtained, which, after a scaling assumption, takes on a form that is the basis of the results derived in the rest of the paper. One of these is an expansion for the cumulants, which are all finite, subject to mild conditions on the functions defining the process. This is in contrast with the Lévy distribution (which can be obtained from our model in certain limits), which has no finite moments. The evaluation of the spectral density and the form of the probability density function in the tails of the distribution shows that the model exhibits a power-law spectrum and long tails in a natural way. A careful analysis of the characteristic function shows that it may be separated into a part representing a Lévy process together with another part representing the deviation of our model from the Lévy process. This
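A single shot-noise component of the kind being superposed can be sketched as follows. This is an illustrative discretization with an uncolored, exponentially decaying pulse shape, not the authors' model:

```python
import math
import random

def shot_noise(rate, tau, T, dt, seed=1):
    """Discretized shot-noise process: Poisson pulses arriving at rate
    `rate`, each contributing an exponentially decaying response
    h(t) = exp(-t / tau), summed over all past event times."""
    random.seed(seed)
    steps = int(round(T / dt))
    events = []
    t = 0.0
    # draw Poisson event times on [0, T)
    while True:
        t += random.expovariate(rate)
        if t >= T:
            break
        events.append(t)
    x = [0.0] * steps
    for i in range(steps):
        ti = i * dt
        x[i] = sum(math.exp(-(ti - te) / tau) for te in events if te <= ti)
    return x
```

Superposing many such components with different pulse shapes and time scales is what produces the colored, non-Gaussian process analyzed in the paper.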