876 results for DISTRIBUTION MODELS
Abstract:
Despite the extensive implementation of Superstreets on congested arterials, reliable methodologies for such designs remain unavailable. The purpose of this research is to fill that gap by offering reliable tools to assist traffic professionals in the design of Superstreets with and without signal control. The toolset developed in this thesis consists of three models. The first model determines the minimum U-turn offset length for an unsignalized Superstreet, given the arterial headway distribution of the traffic flows and the distribution of critical gaps among drivers. The second model estimates the queue size and its variation on each critical link in a signalized Superstreet, based on a given signal plan and the range of observed volumes. Recognizing that the operational benefits of a Superstreet cannot be realized without an effective signal plan, the third model offers a signal optimization method that generates progression offsets for the heavy arterial flows moving into and out of such an intersection design.
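For intuition only (hypothetical parameters, sketched in Python; this is a textbook gap-acceptance back-of-envelope, not the thesis's model), the conflicting flow and critical-gap parameters determine a crossover capacity, and a queue percentile suggests the storage component of the offset:

```python
import numpy as np

# Hypothetical inputs for an unsignalized Superstreet U-turn crossover.
v_c = 900.0     # conflicting arterial flow (veh/h)
t_c = 6.0       # mean critical gap (s)
t_f = 3.0       # follow-up headway (s)
demand = 300.0  # U-turn demand (veh/h)

# HCM-style potential capacity under exponential (random) headways.
cap = v_c * np.exp(-v_c * t_c / 3600) / (1 - np.exp(-v_c * t_f / 3600))

# Rough 95th-percentile queue via an M/M/1 approximation: P(N >= n) = rho**n.
rho = demand / cap
n95 = int(np.ceil(np.log(0.05) / np.log(rho)))

spacing = 7.5   # assumed storage length per queued vehicle (m)
print(f"capacity ~ {cap:.0f} veh/h, rho = {rho:.2f}, "
      f"95th-pct queue ~ {n95} veh, storage ~ {n95 * spacing:.0f} m")
```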
Abstract:
Pressure management (PM) is commonly used in water distribution systems (WDSs). In the last decade, a strategic objective in the field has been the development of new scientific and technical methods for its implementation. However, owing to a lack of systematic analysis of the results obtained in practical cases, progress has not always been reflected in practical actions. To address this problem, this paper provides a comprehensive analysis of the most innovative issues related to PM. The proposed methodology is based on a case-study comparison of qualitative concepts involving published work from 140 sources. The results include a qualitative analysis covering four aspects: (1) the objectives pursued through PM; (2) types of regulation, including advanced control systems based on electronic controllers; (3) new methods for designing districts; and (4) the development of optimization models associated with PM. The evolution of these four aspects is examined and discussed. Conclusions regarding the current status of each aspect are drawn, and proposals for future research are outlined.
Abstract:
Doctorate in Management
Abstract:
For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to comparing probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences about CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values, which makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than measures of central tendency. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might result from climate change. This feature makes CRMs attractive candidates for investigating the feasibility of rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing the merits or limitations of the ENSO-based predictors.
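As a rough sketch of the idea (assuming the Python lifelines package and entirely synthetic data; the paper's tools and uncertainty envelopes are not reproduced here), a Cox model can be fitted with a non-time response, here seasonal rainfall, playing the role of the "duration" axis, so that conditional CDFs follow directly:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)

# Synthetic data: a hypothetical link between an ENSO index and rainfall.
n = 200
enso = rng.normal(size=n)
rain = rng.gamma(shape=4.0, scale=100 * np.exp(-0.3 * enso))

df = pd.DataFrame({"rain": rain, "observed": 1, "enso": enso})

# Treat rainfall as the "time-to-event" axis; every record is an "event".
cph = CoxPHFitter()
cph.fit(df, duration_col="rain", event_col="observed")

# Conditional CDFs for two ENSO scenarios: F(x) = 1 - S(x).
scenarios = pd.DataFrame({"enso": [-1.0, 1.0]})
cdf = 1 - cph.predict_survival_function(scenarios)
print(cdf.tail())
```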
Abstract:
Understanding the factors that affect seagrass meadows across their entire range of distribution is challenging yet important for their conservation. We model the environmental niche of Cymodocea nodosa using a combination of environmental variables and landscape metrics to examine the factors defining its distribution and to find suitable habitats for the species. The most relevant environmental variables defining the distribution of C. nodosa were sea surface temperature (SST) and salinity. We found suitable habitats at SSTs from 5.8 ºC to 26.4 ºC and salinities ranging from 17.5 to 39.3. Optimal values of mean winter wave height ranged between 1.2 m and 1.5 m, while waves higher than 2.5 m seemed to limit the presence of the species. The influence of nutrients and pH, despite carrying weight in the models, was less clear in terms of the ranges that confine the distribution of the species. Landscape metrics able to capture variation in the coastline significantly enhanced the accuracy of the models, despite the limitations imposed by the scale of the study. By contrasting predictive approaches, we identified the variables that make areas unsuitable for C. nodosa, as well as suitable habitats not occupied by the species. These findings are encouraging for future studies on climate-related marine range shifts and for meadow restoration projects in these fragile ecosystems.
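A toy niche-modelling sketch in Python (synthetic data and hypothetical response shapes, not the study's model or variables) illustrates how quadratic terms allow predicted suitability to peak inside an SST/salinity window:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)

# Synthetic presence/absence data peaking at hypothetical optima.
n = 1000
sst = rng.uniform(0, 30, n)   # sea surface temperature (degC)
sal = rng.uniform(10, 40, n)  # salinity
logit = 3 - 0.05 * (sst - 16) ** 2 - 0.03 * (sal - 35) ** 2
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

# Quadratic logistic regression as a minimal niche model.
X = np.column_stack([sst, sal])
sdm = make_pipeline(StandardScaler(), PolynomialFeatures(2),
                    LogisticRegression(max_iter=1000))
sdm.fit(X, presence)

# Predicted habitat suitability on a coarse SST x salinity grid.
grid = np.array([[t, s] for t in (5, 15, 25) for s in (20, 30, 38)])
print(np.round(sdm.predict_proba(grid)[:, 1], 2))
```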
Abstract:
Earth's climate has changed significantly in the last century, and models indicate that it will continue to change over the coming decades even if greenhouse gas emissions stop immediately. These changes affect plant populations as well as the current distribution of several species. Because plants generally have a lower dispersal capacity than animals, they are likely to suffer the impacts of climate change more intensely.
Abstract:
Ecological models written in a mathematical language L(M), or model language, with a given style or methodology can be treated as a text. Statistical linguistic laws can be applied to them, and experimental results demonstrate that a mathematical model behaves like a literary text in any natural language. Such a text has the following characteristics: (a) the variables, their transformed functions, and the parameters are the lexical units (LUN) of ecological models; (b) the syllables are constituted by a LUN, or a chain of them, separated by operating or ordering LUNs; (c) the flow equations are words; and (d) the distribution of words (LUN and composed LUN, or CLUN) according to their lengths follows a Poisson distribution, Chebanov's law, which is founded on Vakar's formula and is calculated like the linguistic entropy for L(M). We apply these ideas to practical examples using the MARIOLA model. This paper studies the lengths of the simple lexical units, composed lexical units, and words of text models, expressing these lengths in numbers of primitive symbols and syllables. The use of these linguistic laws makes it possible to indicate the degree of information given by an ecological model.
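For reference, Chebanov's law is commonly stated as a one-displaced Poisson distribution over word lengths; the exact formulation combined with Vakar's formula in the paper may differ:

```latex
% Chebanov's law: the proportion p_k of words of length k syllables follows
% a one-displaced Poisson distribution; \lambda + 1 is the mean word length.
p_k = \frac{e^{-\lambda}\,\lambda^{\,k-1}}{(k-1)!}, \qquad k = 1, 2, 3, \dots
```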
Abstract:
The particulate matter (PM) distribution trends that exist in catalyzed particulate filters (CPFs) after loading, passive oxidation, active regeneration, and post-loading conditions are not clearly understood. These data are required to optimize the operation of CPFs, prevent damage to CPFs caused by non-uniform distributions, and develop accurate CPF models. To develop an understanding of PM distribution trends, multiple tests were conducted and the PM distribution was measured in three dimensions using a terahertz-wave scanner. The results of this work indicate that loading, passive oxidation, active regeneration, and post loading can all cause non-uniform PM distributions. The density of the PM in the substrate after loading, and the amount of PM oxidized during passive oxidations and active regenerations, affect the uniformity of the distribution. Post loading that occurs after active regenerations results in distributions that are less uniform than post loading that occurs after passive oxidations.
Abstract:
The multivariate normal distribution is commonly encountered in many fields, and missing values are a frequent issue in practice. The purpose of this research was to estimate the parameters of the three-dimensional covariance permutation-symmetric normal distribution with complete data and with all possible patterns of incomplete data. In this study, maximum likelihood estimators (MLEs) under missing data were derived, and the properties of the MLEs as well as their sampling distributions were obtained. A Monte Carlo simulation study was used to evaluate the performance of the considered estimators for both cases, when ρ was known and when it was unknown. All results indicated that, compared to estimators obtained by omitting observations with missing data, the estimators derived in this article performed better. Furthermore, when ρ was unknown, using the estimate of ρ led to the same conclusion.
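The article's closed-form missing-data estimators are not reproduced here; as a minimal complete-data sketch in Python, the MLEs under the permutation-symmetric (equicorrelated) model pool the variances and pairwise covariances, and a Monte Carlo loop of the kind described evaluates them:

```python
import numpy as np

rng = np.random.default_rng(0)

def equicorr_cov(sigma2, rho, d=3):
    # Permutation-symmetric covariance: equal variances, equal covariances.
    return sigma2 * ((1 - rho) * np.eye(d) + rho * np.ones((d, d)))

def mle_equicorr(X):
    # Complete-data MLEs: pool the d variances and d(d-1) covariances.
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                  # covariance MLE
    sigma2_hat = np.trace(S) / d                       # pooled variance
    cov_hat = (S.sum() - np.trace(S)) / (d * (d - 1))  # pooled covariance
    return X.mean(axis=0), sigma2_hat, cov_hat / sigma2_hat

# Monte Carlo skeleton: bias of rho-hat over 2000 replicates of size 50.
cov = equicorr_cov(sigma2=1.0, rho=0.5)
est = [mle_equicorr(rng.multivariate_normal(np.zeros(3), cov, size=50))[2]
       for _ in range(2000)]
print("mean rho-hat:", round(np.mean(est), 3),
      "bias:", round(np.mean(est) - 0.5, 3))
```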
Abstract:
Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify it through the correlation coefficient, a parameter that measures its strength, can be used to develop a predictive equation, and ultimately supports testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the MLE ρ̂. Using this asymptotic normality, we constructed confidence intervals for the correlation coefficient ρ and tested hypotheses about ρ. With a series of simulations, the performance of the new estimators was studied and compared with that of estimators already in the literature. The results indicated that the MLE performs as well as or better than the alternatives.
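For context, a standard large-sample confidence interval for a correlation coefficient (not necessarily the construction derived in this research) uses Fisher's z-transform of the estimate:

```latex
% Fisher z-transform: \hat z is approximately normal with variance 1/(n-3).
\hat z = \operatorname{artanh}\hat\rho = \tfrac{1}{2}\ln\frac{1+\hat\rho}{1-\hat\rho},
\qquad
\rho \in \left[ \tanh\!\left(\hat z - \tfrac{z_{1-\alpha/2}}{\sqrt{n-3}}\right),\;
                \tanh\!\left(\hat z + \tfrac{z_{1-\alpha/2}}{\sqrt{n-3}}\right) \right].
```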
Abstract:
In restructured power systems, generation and commercialization became market activities, while transmission and distribution continue as regulated monopolies. As a result, the adequacy of the transmission network should be evaluated independently of the generation system. Having introduced the constrained fuzzy power flow (CFPF) as a suitable tool to quantify the adequacy of a transmission network to satisfy 'reasonable demands for the transmission of electricity' (as stated, for instance, in European Directive 2009/72/EC), the aim is now to show how this approach can be used in conjunction with probabilistic criteria in security analysis. Classical security analysis models of power systems consider the composite system (generation plus transmission). The state of system components is usually modeled with probabilities, while loads (and generation) are modeled by crisp numbers, probability distributions, or fuzzy numbers. In the case of the CFPF, failures of transmission network components have been investigated. In this framework, probabilistic methods are used to model failures of the transmission system components, and possibility models are used to deal with 'reasonable demands'. The enhanced version of the CFPF model is applied to an illustrative case.
Abstract:
In this paper, the IEEE 14-bus test system is used to perform an adequacy assessment of a transmission system when large-scale integration of electric vehicles (EVs) at the distribution level is considered. In this framework, the symmetric/constrained fuzzy power flow (SFPF/CFPF) was proposed. The SFPF/CFPF models are suitable for quantifying the adequacy of a transmission network to satisfy “reasonable demands for the transmission of electricity” as defined, for instance, in European Directive 2009/72/EC. In this framework, electric vehicles of different types are treated as fuzzy loads configuring part of the “reasonable demands”. This study also shows how to evaluate the number of EVs that can be safely accommodated on the grid while meeting a certain adequacy level.
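As a minimal illustration of treating EV demand as a fuzzy load (illustrative numbers in Python; the SFPF/CFPF formulation itself is not reproduced), a triangular membership function yields interval-valued bus loads at each possibility level via its alpha-cuts:

```python
# Triangular fuzzy number (a, m, b): support [a, b], modal value m.
def alpha_cut(a, m, b, alpha):
    """Interval of values whose membership degree is at least alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

base_mw = 20.0              # hypothetical bus base load (MW)
ev_mw = 0.007               # hypothetical demand per connected EV (MW)
fleet = (1000, 3000, 6000)  # pessimistic / modal / optimistic EV counts

loads = tuple(base_mw + n * ev_mw for n in fleet)
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(*loads, alpha)
    print(f"alpha = {alpha}: bus load in [{lo:.1f}, {hi:.1f}] MW")
```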
Abstract:
The diaphragm is the primary inspiratory pump muscle of breathing. Notwithstanding its critical role in pulmonary ventilation, the diaphragm, like other striated muscles, is malleable in response to physiological and pathophysiological stressors, with potential implications for the maintenance of respiratory homeostasis. This review considers hypoxic adaptation of the diaphragm muscle, with a focus on functional, structural, and metabolic remodeling relevant to conditions such as high altitude and chronic respiratory disease. On the basis of emerging data in animal models, we posit that hypoxia is a significant driver of respiratory muscle plasticity, with evidence suggestive of both compensatory and deleterious adaptations under sustained exposure to low oxygen. Cellular strategies driving diaphragm remodeling during sustained hypoxia appear to confer hypoxic tolerance at the expense of peak force-generating capacity, a key functional parameter that correlates with patient morbidity and mortality. Changes include, but are not limited to: redox-dependent activation of hypoxia-inducible factor (HIF) and MAP kinases; time-dependent carbonylation of key metabolic and functional proteins; decreased mitochondrial respiration; activation of atrophic signaling and increased proteolysis; and altered functional performance. Diaphragm muscle weakness may be a signature effect of sustained hypoxic exposure. We discuss the putative role of reactive oxygen species as mediators of both advantageous and disadvantageous adaptations of the diaphragm muscle to sustained hypoxia, and the role of antioxidants in mitigating the adverse effects of chronic hypoxic stress on respiratory muscle function.
Abstract:
Myocardial fibrosis detected via delayed-enhancement magnetic resonance imaging (MRI) has been shown to be a strong indicator of ventricular tachycardia (VT) inducibility. However, little is known about how inducibility is affected by the details of fibrosis extent, morphology, and border-zone configuration. The objective of this article is to systematically study the arrhythmogenic effects of fibrosis geometry and extent, specifically on VT inducibility and maintenance. We present a set of methods for constructing patient-specific computational models of human ventricles using in vivo MRI data from patients suffering from hypertension, hypercholesterolemia, and chronic myocardial infarction. Additional synthesized models with morphologically varied extents of fibrosis and gray-zone (GZ) distribution were derived to study alterations in arrhythmia induction and reentry patterns. Detailed electrophysiological simulations demonstrated that (1) VT morphology was highly dependent on the extent of fibrosis, which acts as a structural substrate; (2) reentry tended to be anchored to the fibrosis edges and showed transmural conduction of activations through narrow channels formed within the fibrosis; and (3) increasing the extent of GZ within fibrosis tended to destabilize the structural reentry sites and aggravate the VT, as compared to fibrotic regions of the same size and shape with lower or no GZ. The approach and findings represent a significant step toward patient-specific cardiac modeling as a reliable tool for VT prediction and patient management. Sensitivities to approximation nuances in the image-based modeling of structural pathology are also identified.
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation toward high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on the »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
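As a point of reference, the classical univariate CUSUM test that this theory generalizes fits in a few lines of Python (the Hilbert-space, dependence-adjusted versions developed in the thesis are substantially more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated series with a mean shift at the midpoint.
n = 400
x = np.concatenate([rng.normal(0.0, 1, n // 2), rng.normal(0.7, 1, n // 2)])

# CUSUM process: |S_k - (k/n) S_n| / (sigma * sqrt(n)).
k = np.arange(1, n + 1)
S = np.cumsum(x)
cusum = np.abs(S - k / n * S[-1]) / (np.std(x, ddof=1) * np.sqrt(n))

stat = cusum.max()              # sup-norm test statistic
khat = int(cusum.argmax()) + 1  # change point estimate
# Under H0 the statistic converges to sup|B(t)| for a Brownian bridge B;
# its 95% quantile is about 1.358 (Kolmogorov distribution). The pooled
# standard deviation is a crude scale estimate; dependent data require
# long-run variance estimators instead.
print(f"statistic = {stat:.2f}, reject H0: {stat > 1.358}, k-hat = {khat}")
```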