967 results for Marginal distribution costs


Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Studies of chronic life-threatening diseases often involve both mortality and morbidity. In observational studies, the data may also be subject to administrative left truncation and right censoring. Since mortality and morbidity may be correlated and mortality may censor morbidity, the Lynden-Bell estimator for left truncated and right censored data may be biased for estimating the marginal survival function of the non-terminal event. We propose a semiparametric estimator for this survival function based on a joint model for the two time-to-event variables, which utilizes the gamma frailty specification in the region of the observable data. Firstly, we develop a novel estimator for the gamma frailty parameter under left truncation. Using this estimator, we then derive a closed form estimator for the marginal distribution of the non-terminal event. The large sample properties of the estimators are established via asymptotic theory. The methodology performs well with moderate sample sizes, both in simulations and in an analysis of data from a diabetes registry.
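For context, a minimal numerical sketch of the Lynden-Bell product-limit estimator mentioned above (the estimator the paper argues can be biased for the non-terminal event) is given below; the proposed gamma-frailty-based estimator itself is not reproduced. Variable names and the toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def lynden_bell_survival(entry, time, event, grid):
    """Product-limit (Lynden-Bell) estimate of the survival function under
    left truncation (entry time) and right censoring (event == 0).

    entry : study-entry (left-truncation) times
    time  : observed times (minimum of event and censoring time)
    event : 1 if the event was observed, 0 if censored
    grid  : time points at which the survival estimate is evaluated
    """
    entry, time, event = (np.asarray(a, dtype=float) for a in (entry, time, event))
    surv_at = []
    s = 1.0
    for t in np.unique(time[event == 1]):            # distinct event times
        d = np.sum((time == t) & (event == 1))       # events at t
        r = np.sum((entry <= t) & (time >= t))       # number at risk at t
        s *= 1.0 - d / r
        surv_at.append((t, s))
    out = []
    for g in grid:
        vals = [s for t, s in surv_at if t <= g]
        out.append(vals[-1] if vals else 1.0)
    return np.array(out)

# toy illustration (not a rigorous left-truncation sampling mechanism)
rng = np.random.default_rng(0)
n = 400
entry = rng.uniform(0.0, 1.0, n)
latent = entry + rng.exponential(2.0, n)
cens = entry + rng.exponential(3.0, n)
time = np.minimum(latent, cens)
event = (latent <= cens).astype(int)
print(lynden_bell_survival(entry, time, event, [0.5, 1.0, 2.0, 4.0]))
```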

Relevance:

80.00%

Publisher:

Abstract:

Fossil pollen data from stratigraphic cores are irregularly spaced in time because of non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, under suitable regularity conditions monotonicity allows it to be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimating the marginal distribution function of the regression errors. These issues are addressed in proposing a plug-in algorithm for optimal bandwidth selection and in constructing confidence bands for the trend function. High-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
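As a point of reference for the nonparametric regression setting described above, the sketch below shows a basic Nadaraya-Watson kernel estimate of a trend on irregularly sampled times; the paper's plug-in bandwidth selection, long-memory error structure and confidence bands are not reproduced, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def nw_trend(t_obs, y_obs, t_grid, bandwidth):
    """Nadaraya-Watson kernel estimate of a trend function from
    irregularly spaced observations, using a Gaussian kernel."""
    t_obs = np.asarray(t_obs, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    est = np.empty(len(t_grid))
    for i, t in enumerate(t_grid):
        w = np.exp(-0.5 * ((t_obs - t) / bandwidth) ** 2)   # kernel weights
        est[i] = np.sum(w * y_obs) / np.sum(w)
    return est

# irregular sampling times, as with age-depth transformed pollen cores
rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0, 20000, 300))                 # calendar years BP
y_obs = np.sin(t_obs / 3000.0) + rng.normal(0, 0.3, 300)
trend = nw_trend(t_obs, y_obs, np.linspace(0, 20000, 50), bandwidth=800.0)
print(trend[:5])
```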

Relevance:

80.00%

Publisher:

Abstract:

Of the large clinical trials evaluating screening mammography efficacy, none included women aged 75 and older. Recommendations on an upper age limit at which to discontinue screening are based on indirect evidence and are not consistent. Screening mammography is evaluated using observational data from the SEER-Medicare linked database. Measuring the benefit of screening mammography is difficult due to the impact of lead-time bias, length bias and over-detection. The underlying conceptual model divides the disease into two stages: pre-clinical (T0) and symptomatic (T1) breast cancer. Treating the times spent in these phases as a pair of dependent bivariate observations, (t0, t1), estimates are derived to describe the distribution of this random vector. To quantify the effect of screening mammography, statistical inference is made about the parameters that correspond to the marginal distribution of the symptomatic phase duration (T1). The estimated hazard ratio of death from breast cancer, comparing women with screen-detected tumors to those whose tumors were detected at symptom onset, is 0.36 (0.30, 0.42), indicating a benefit among the screen-detected cases.
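To illustrate how a hazard ratio comparing screen-detected and symptomatic cases can be obtained in practice, here is a minimal Cox proportional hazards sketch; it assumes the third-party lifelines package is available, uses hypothetical column names and toy data, and does not reproduce the paper's corrections for lead-time bias, length bias or over-detection.

```python
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is installed

# Hypothetical analysis data set: one row per woman, with follow-up time,
# a death-from-breast-cancer indicator, and a screen-detection flag.
df = pd.DataFrame({
    "time":  [60, 24, 85, 10, 47, 33, 90, 15],      # months of follow-up (toy values)
    "death": [0, 1, 1, 1, 0, 0, 1, 1],              # 1 = died of breast cancer
    "screen_detected": [1, 0, 1, 0, 0, 1, 1, 0],    # 1 = tumor found by screening
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
# hazard ratio for screen detection vs. symptomatic detection
print(cph.hazard_ratios_["screen_detected"])
```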

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a simulation of the reduction of several components of trade costs in Asia and examines its impact on the economy. Our simulation model, based on new economic geography, comprises seven sectors, including manufacturing and non-manufacturing sectors, and 1,715 regions in 18 countries/economies in Asia, in addition to the two economies of the US and the European Union. The geographical routing of transactions among regions is determined by firms' modal choice. The model also includes estimates of several border cost measures, such as tariff rates, non-tariff barriers, other border clearance costs and transshipment costs. Our simulation analysis for Asia includes several scenarios involving the improvement or development of routes and the reduction of the above-mentioned border costs. We show that the contribution of physical and non-physical infrastructure improvements implemented together is larger than the sum of the contributions of each when implemented independently.
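As a small illustration of the modal-choice mechanism mentioned above, the following is a generic multinomial-logit sketch; the cost and time sensitivities and the mode figures are hypothetical placeholders, not parameters from the model described in the paper.

```python
import numpy as np

def modal_choice_probabilities(costs, times, beta_cost=-0.8, beta_time=-0.5):
    """Multinomial-logit choice probabilities over transport modes, given a
    generalized cost and transit time per mode (hypothetical sensitivities)."""
    utilities = beta_cost * np.asarray(costs) + beta_time * np.asarray(times)
    expu = np.exp(utilities - utilities.max())   # numerically stabilised softmax
    return expu / expu.sum()

# e.g. road, rail, sea between two regions (illustrative numbers only)
print(modal_choice_probabilities(costs=[1.0, 0.7, 0.4], times=[1.0, 2.0, 5.0]))
```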

Relevance:

80.00%

Publisher:

Abstract:

In this article we study the univariate and bivariate truncated von Mises distribution as a generalization of the von Mises distribution (\cite{jupp1989}, \cite{mardia2000directional}). This entails the addition of two or four new truncation parameters in the univariate and bivariate cases, respectively. The results include the definition and properties of the distribution and maximum likelihood estimators for the univariate and bivariate cases. Additionally, the analysis of the bivariate case shows that the conditional distribution is a truncated von Mises distribution, whereas the marginal distribution generalizes the distribution introduced in \cite{repe}. From the viewpoint of applications, we test the distribution with simulated data, as well as with data on leaf inclination angles (\cite{safari}) and dihedral angles in protein chains (\cite{prote}). This research aims to establish this probability distribution as a potential option for modelling or simulating any kind of phenomena where circular distributions are applicable.
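A minimal numerical sketch of the univariate truncated von Mises density is shown below, with the truncation constant obtained by quadrature; parameter values are illustrative assumptions, and the bivariate case and the maximum likelihood estimators are not reproduced.

```python
import numpy as np
from scipy.integrate import quad

def truncated_von_mises_pdf(x, mu, kappa, a, b):
    """Density of a von Mises(mu, kappa) distribution truncated to the arc [a, b].
    The normalising constant over [a, b] is computed numerically."""
    kernel = lambda t: np.exp(kappa * np.cos(t - mu))
    norm, _ = quad(kernel, a, b)
    x = np.asarray(x, dtype=float)
    return np.where((x >= a) & (x <= b),
                    np.exp(kappa * np.cos(x - mu)) / norm,
                    0.0)

# density on a truncated arc, e.g. leaf inclination angles restricted to [0, pi/2]
xs = np.linspace(0.0, np.pi / 2, 5)
print(truncated_von_mises_pdf(xs, mu=0.8, kappa=2.0, a=0.0, b=np.pi / 2))
```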

Relevance:

80.00%

Publisher:

Abstract:

Accurate design flood estimates associated with high return periods are necessary to design and manage hydraulic structures such as dams. In practice, such quantiles are usually estimated via univariate flood frequency analyses, mostly based on the study of peak flows. Nevertheless, floods are multivariate in nature, and it is essential to consider representative flood characteristics, such as flood peak, hydrograph volume and hydrograph duration, to carry out an appropriate analysis; this is especially true when the inflow peak is transformed into a different outflow peak during the routing process in a reservoir or floodplain. Multivariate flood frequency analyses have traditionally been performed using standard bivariate distributions to model correlated variables, yet these entail shortcomings such as the need to use the same kind of marginal distribution for all variables and the assumption of a linear dependence relation between them. Recently, the use of copulas has spread in hydrology because of their advantages in the multivariate context, as they overcome the drawbacks of the traditional approach. A copula is a function that represents the dependence structure of the studied variables and allows their multivariate frequency distribution to be obtained from their marginal distributions, regardless of the kind of marginal distributions considered. The estimation of multivariate return periods, and therefore of multivariate quantiles, is also facilitated by the way in which copulas are formulated.

The present doctoral thesis seeks to provide methodologies that improve the traditional techniques used by practitioners, in order to estimate more appropriate flood quantiles for dam design, dam management and flood risk assessment through bivariate flood frequency analyses based on the copula approach. The flood variables considered for that goal are peak flow and hydrograph volume. In order to accomplish a complete study, the present research addresses: (i) a bivariate local flood frequency analysis focused on examining and comparing theoretical return periods, based on the natural probability of occurrence of a flood, with the return period associated with the risk of dam overtopping, to estimate quantiles at a given gauged site; (ii) the extension of the local to the regional approach, supplying a complete procedure for performing a bivariate regional flood frequency analysis to either estimate quantiles at ungauged sites or improve at-site estimates at gauged sites; (iii) the use of copulas to investigate bivariate flood trends due to increasing urbanisation levels in a catchment; and (iv) the extension of observed flood series by combining the benefits of a copula-based model and a hydro-meteorological model.
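As a small illustration of how a copula links the marginal distributions of peak flow and hydrograph volume to a joint return period, the following sketch uses a Gumbel-Hougaard copula and the "OR" definition of the joint return period; the copula family, its parameter and the marginal non-exceedance probabilities are hypothetical choices, not those of the thesis.

```python
import numpy as np

def gumbel_copula_cdf(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls the dependence strength."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_or_return_period(u, v, theta, mu_years=1.0):
    """'OR' joint return period: mean time between events in which either the
    peak or the volume exceeds its quantile (annual maxima -> mu_years = 1)."""
    return mu_years / (1.0 - gumbel_copula_cdf(u, v, theta))

# u, v are the non-exceedance probabilities of peak flow and hydrograph volume
# under their fitted marginal distributions (hypothetical values below)
print(joint_or_return_period(u=0.99, v=0.99, theta=2.5))
```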

Relevance:

80.00%

Publisher:

Abstract:

This thesis tests and develops methodologies to improve the estimation of the design and extreme floods used in assessing the hydrological safety of dams.

First, the work addresses the fitting of frequency distributions of maximum peak flows and their extrapolation to high return periods. This has become a major issue, since the adoption of increasingly strict standards for dam hydrological safety implies design return periods that are very high and whose estimation entails great uncertainty. It is therefore important to incorporate all available techniques for estimating design peak flows in order to reduce this uncertainty. It is also important to make a sound selection of the statistical model (distribution function and fitting procedure), so that its ability both to describe the sample and to make robust predictions of high-return-period quantiles is guaranteed. To this end, studies have been carried out on a national scale to determine the regionalisation scheme that gives the best results for annual maximum peak flows, given the hydrological characteristics of Spanish basins and the length of the available records. The methodology starts by identifying homogeneous regions, whose boundaries are delineated from the physiographic and climatic characteristics of the basins and the variability of their statistics, and whose homogeneity is subsequently tested. A statistical model for annual maximum peak flows is then selected with the best behaviour in the different regions of peninsular Spain, both for describing the sample data and for extrapolating to the highest return periods. The selection is based, among other things, on the synthetic generation of data series through Monte Carlo simulations and on the statistical analysis of the results obtained from fitting distribution functions to those series under different hypotheses.

Secondly, the work addresses the relationship between peak flow and volume and the definition of design flood hydrographs based on it, an issue that can be highly important for dams with large reservoir volumes. Commonly used hydrological procedures, however, do not account for the statistical dependence between these variables. A simple and robust procedure is developed to characterise this dependence, representing the joint distribution function of peak flow and volume through the marginal distribution function of the peak flow and the conditional distribution function of the volume given the peak, the latter determined by a log-normal distribution fitted with a regional procedure. Its practical application is proposed through a probabilistic calculation procedure based on the stochastic generation of a large number of hydrographs. Applying this procedure to dam hydrological safety requires a correct interpretation of the return period concept for bivariate hydrological variables. An interpretation is proposed in which the return period is understood as the inverse of the probability of exceeding a given reservoir level. When this return period is related to the hydrological variables, the design flood hydrograph is no longer a single hydrograph but a family of hydrographs that produce the same maximum reservoir level, represented by a curve in the peak flow-volume plane. This family of design hydrographs depends on the dam itself, the peak-volume curves varying with, for example, the reservoir volume or the spillway length. The proposed procedure is illustrated through its application to two case studies.

Finally, the work addresses the calculation of seasonal floods, which is essential when establishing reservoir operation rules and can also be fundamental for assessing the hydrological safety of existing dams. The calculation of seasonal floods is, however, complex and not yet fully settled, and commonly used procedures may present certain problems. The statistical method of partial duration series, or peaks over threshold, can be a valid alternative that solves these problems when the floods in the different seasons are generated by the same type of event. A study has been carried out to verify whether the hypothesis of statistical homogeneity of flood peak data from different seasons of the year is adequate in Spain. The seasonal periods for which the analysis is most appropriate have also been examined, a question of great relevance for guaranteeing correct results, and a simple procedure has been developed to determine the threshold for selecting the data in a way that guarantees their independence, one of the main difficulties in the practical application of partial duration series. Furthermore, the practical application of seasonal frequency curves requires a correct interpretation of the seasonal return period concept. A criterion is proposed to determine seasonal return periods consistently with the annual return period and with an adequate distribution of probability among the seasons. Lastly, a procedure for calculating seasonal flood flows is presented and illustrated through its application to a case study.
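The stochastic hydrograph generation step described above can be sketched as follows: sample the peak flow from a marginal distribution and then the hydrograph volume from a log-normal distribution conditional on the peak. The marginal family (Gumbel here) and all parameter values are hypothetical placeholders; the thesis's regional fitting procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_peak_volume_pairs(n, loc=300.0, scale=120.0, a=0.9, b=0.5, sigma=0.3):
    """Stochastic generation of (peak, volume) pairs: peak flow from a Gumbel
    marginal, and hydrograph volume from a log-normal distribution conditional
    on the peak (log V | Q ~ N(a * log Q + b, sigma^2)).
    All parameter values here are hypothetical placeholders."""
    peaks = rng.gumbel(loc=loc, scale=scale, size=n)    # peak flows, m^3/s
    peaks = np.clip(peaks, 1.0, None)                   # keep peaks positive
    log_means = a * np.log(peaks) + b
    volumes = np.exp(rng.normal(log_means, sigma))      # volumes, hm^3
    return peaks, volumes

peaks, volumes = simulate_peak_volume_pairs(10000)
print(peaks.mean(), volumes.mean())
```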

Relevance:

80.00%

Publisher:

Abstract:

Poster presented in the 24th European Symposium on Computer Aided Process Engineering (ESCAPE 24), Budapest, Hungary, June 15-18, 2014.

Relevance:

80.00%

Publisher:

Abstract:

In this work, we analyze the effect of incorporating life cycle inventory (LCI) uncertainty into the multi-objective optimization of chemical supply chains (SC), considering simultaneously their economic and environmental performance. To this end, we present a stochastic multi-scenario mixed-integer linear programming (MILP) model coupled with a two-step transformation scenario generation algorithm, with the unique feature of providing scenarios in which the LCI random variables are correlated and each of them has the desired lognormal marginal distribution. The environmental performance is quantified following life cycle assessment (LCA) principles, which are represented in the model formulation through standard algebraic equations. The capabilities of our approach are illustrated through a case study of a petrochemical supply chain. We show that the stochastic solution improves the economic performance of the SC compared with the deterministic one at any level of environmental impact, and moreover that the correlation among environmental burdens provides more realistic scenarios for the decision-making process.
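One common way to realise the two-step scenario generation described above (correlated draws first, lognormal marginals second) is a Gaussian-copula-style transformation, sketched below; the correlation matrix and marginal parameters are hypothetical, and note that the correlation is imposed on the underlying normals rather than matched exactly in lognormal space.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlated_lognormal_scenarios(n, mu, sigma, corr):
    """Generate n scenarios of LCI factors that are correlated and have
    log-normal marginals: step 1 draws correlated standard normals via a
    Cholesky factor of `corr`; step 2 maps them through each marginal
    (mu, sigma parameterise the underlying normal of each factor)."""
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    L = np.linalg.cholesky(np.asarray(corr))
    z = rng.standard_normal((n, len(mu))) @ L.T      # correlated N(0, corr) draws
    return np.exp(mu + sigma * z)                    # log-normal marginals

# hypothetical example: 3 correlated emission factors, 1000 scenarios
corr = [[1.0, 0.6, 0.3],
        [0.6, 1.0, 0.5],
        [0.3, 0.5, 1.0]]
scen = correlated_lognormal_scenarios(1000, mu=[0.0, 0.2, -0.1],
                                      sigma=[0.25, 0.30, 0.20], corr=corr)
print(scen.shape, np.corrcoef(np.log(scen), rowvar=False).round(2))
```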

Relevance:

80.00%

Publisher:

Abstract:

Many variables of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive, requiring an amount of processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. The computing time has significant implications for which approach researchers prefer. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women using a GLMM over three waves of the HILDA survey.
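To make the numerical integration issue concrete, the sketch below approximates the marginal likelihood contribution of a single cluster in a random-intercept logistic model with ordinary Gauss-Hermite quadrature; it is a simplified stand-in (binary rather than multinomial response, non-adaptive quadrature), and all data and parameter values are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_likelihood_cluster(y, x, beta, sigma, n_nodes=15):
    """Gauss-Hermite approximation of the marginal likelihood contribution of
    one cluster in a random-intercept logistic model:
        P(y_ij = 1 | b) = logistic(x_ij * beta + b),  b ~ N(0, sigma^2).
    The random intercept is integrated out numerically."""
    nodes, weights = hermgauss(n_nodes)
    total = 0.0
    for node, weight in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma * node                 # change of variables
        eta = x * beta + b
        p = 1.0 / (1.0 + np.exp(-eta))
        lik = np.prod(np.where(y == 1, p, 1.0 - p))     # conditional likelihood
        total += weight * lik
    return total / np.sqrt(np.pi)

# toy cluster: three repeated binary employment indicators for one individual
y = np.array([1, 0, 1])
x = np.array([0.5, 1.0, 1.5])
print(marginal_likelihood_cluster(y, x, beta=0.4, sigma=0.8))
```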

Relevance:

80.00%

Publisher:

Abstract:

The changing business environment has sharpened the focus on the need for robust approaches to supply chain management (SCM) and for improving supply chain capability and performance. This is particularly the case in Ireland, which has the natural disadvantage of a location peripheral to significant markets and sources of raw materials, resulting in relatively high transport and distribution costs. Therefore, in order to gain insights into current levels of diffusion of SCM, a survey was conducted among 776 firms in the Republic of Ireland. The empirical results suggest that there is a need for more widespread adoption of SCM among Irish firms. This is particularly the case in relation to the four main elements of SCM excellence reported in this paper. The design of supply chain solutions is a highly skilled, knowledge-intensive and complex activity, reflected in a shift from 'box moving' to the design and implementation of customised supply chain solutions. Education and training needs must be addressed by stimulating the development of industry-relevant logistics and SCM resources and skills.

Relevance:

80.00%

Publisher:

Abstract:

Vendor-managed inventory (VMI) is a widely used collaborative inventory management policy in which the manufacturer manages the inventory of retailers and takes responsibility for decisions on the timing and extent of inventory replenishment. VMI partnerships help organisations to reduce demand variability, inventory holding costs and distribution costs. This study provides empirical evidence that significant economic benefits can be achieved with the use of a genetic algorithm (GA)-based decision support system (DSS) in a VMI supply chain. A two-stage serial supply chain in which retailers and their supplier operate VMI in an uncertain demand environment is studied. Performance was measured in terms of cost, profit, stockouts and service levels. The results generated by the GA-based model were compared with traditional alternatives. The study found that the GA-based approach outperformed traditional methods and that its use can be economically justified in small- and medium-sized enterprises (SMEs).
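A toy version of a GA-based search over replenishment (order-up-to) levels is sketched below to show the general mechanism; the demand model, cost parameters and GA operators are hypothetical choices and do not represent the DSS evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_cost(order_up_to, demand_mean=100, demand_sd=20,
                  holding=1.0, stockout=10.0, n_sim=2000):
    """Simulated one-period cost for a vector of retailer order-up-to levels
    (hypothetical holding/stockout costs, normal demand)."""
    demand = rng.normal(demand_mean, demand_sd, size=(n_sim, len(order_up_to)))
    shortfall = np.maximum(demand - order_up_to, 0.0)
    leftover = np.maximum(order_up_to - demand, 0.0)
    return (holding * leftover + stockout * shortfall).sum(axis=1).mean()

def genetic_search(n_retailers=3, pop_size=30, generations=50):
    """Very small GA: truncation selection, blend crossover, Gaussian mutation,
    minimising the simulated cost."""
    pop = rng.uniform(50, 200, size=(pop_size, n_retailers))
    for _ in range(generations):
        costs = np.array([expected_cost(ind) for ind in pop])
        parents = pop[np.argsort(costs)][: pop_size // 2]          # keep best half
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.uniform(size=(pop_size, n_retailers))
        children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        pop = children + rng.normal(0, 2.0, size=children.shape)   # mutation
    costs = np.array([expected_cost(ind) for ind in pop])
    return pop[np.argmin(costs)], costs.min()

best_levels, best_cost = genetic_search()
print(best_levels.round(1), round(best_cost, 1))
```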

Relevance:

80.00%

Publisher:

Abstract:

Principal component analysis (PCA) is well recognized for dimensionality reduction, and kernel PCA (KPCA) has also been proposed for statistical data analysis. However, KPCA fails to detect the nonlinear structure of data well when outliers are present. To mitigate this problem, this paper presents a novel algorithm, named iterative robust KPCA (IRKPCA). IRKPCA deals well with outliers and can be carried out in an iterative manner, which makes it suitable for processing incremental input data. As in traditional robust PCA (RPCA), a binary field is employed to characterize the outlier process, and the optimization problem is formulated as maximizing the marginal distribution of a Gibbs distribution. In this paper, this optimization problem is solved by stochastic gradient descent techniques. In IRKPCA, the outlier process lies in a high-dimensional feature space, and therefore the kernel trick is used. IRKPCA can be regarded as a kernelized version of RPCA and a robust form of the kernel Hebbian algorithm. Experimental results on synthetic data demonstrate the effectiveness of IRKPCA. © 2010 Taylor & Francis.
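For reference, the sketch below implements the standard (non-robust) kernel PCA step that IRKPCA builds on: a Gaussian kernel matrix is double-centred, eigendecomposed, and used to project the data; the iterative robust extension with the binary outlier field is not reproduced, and the kernel parameter is an illustrative assumption.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Standard (non-robust) kernel PCA with a Gaussian kernel: build the
    kernel matrix, double-centre it, and project onto the leading eigenvectors.
    Shown only as the baseline that IRKPCA makes robust and incremental."""
    sq = np.sum(X**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2 * X @ X.T     # pairwise squared distances
    K = np.exp(-gamma * sq_dists)
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n      # centring in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                                      # projections of the training data

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
print(kernel_pca(X).shape)   # (100, 2)
```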