886 results for Finite-time stochastic stability
Abstract:
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.
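As context for the comparison (and only as an illustrative assumption, since the expanded finite population mixed model in the paper generalizes it), the textbook mixed-model BLUP of a realized cluster mean shrinks the cluster sample mean toward the overall mean,
\[ \hat{\mu}_i \;=\; \hat{\mu} + k_i\,(\bar{y}_i - \hat{\mu}), \qquad k_i = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_e^2/m_i}, \]
where \(m_i\) is the cluster sample size and \(\sigma_b^2, \sigma_e^2\) are the between- and within-cluster variance components; the MSE comparisons reported in the abstract are relative to predictors of this type.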
Abstract:
Glossoscolex paulistus hemoglobin (HbGp) was studied by dynamic light scattering (DLS), optical absorption spectroscopy (UV-VIS) and differential scanning calorimetry (DSC). At pH 7.0, cyanomet-HbGp is very stable and no oligomeric dissociation is observed, while denaturation occurs at 56 degrees C, 4 degrees C higher than for oxy-HbGp. The oligomeric dissociation of HbGp occurs simultaneously with some protein aggregation. Kinetic studies for oxy-HbGp using UV-VIS and DLS yielded activation energy (E(a)) values of 278-262 kJ/mol (DLS) and 333 kJ/mol (UV-VIS). Complementary DSC studies indicate that the denaturation is irreversible, giving endotherms strongly dependent upon the heating scan rates, which suggests a kinetically controlled process. The dependence on protein concentration suggests that the two components in the endotherms are due to the effect of oligomeric dissociation upon denaturation. Activation energies are in the range 200-560 kJ/mol. The mid-point transition temperatures were in the range 50-65 degrees C. Cyanomet-HbGp shows higher mid-point temperatures as well as activation energies, consistent with its higher stability. DSC data are reported for the first time for an extracellular hemoglobin. (C) 2010 Elsevier B.V. All rights reserved.
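The abstract does not state which kinetic analysis was applied to the scan-rate-dependent endotherms; as a purely illustrative assumption, one standard route for extracting an activation energy from the shift of the endotherm peak temperature \(T_p\) with heating rate \(\beta\) is the Kissinger relation
\[ \ln\!\left(\frac{\beta}{T_p^{2}}\right) \;=\; \ln\!\left(\frac{A R}{E_a}\right) - \frac{E_a}{R\,T_p}, \]
so that \(E_a\) follows from the slope of \(\ln(\beta/T_p^{2})\) plotted against \(1/T_p\).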
Abstract:
This study investigates the relation between foreign direct investment (FDI) and per capita gross domestic product (GDP) in Pakistan. The study is based on a basic Cobb-Douglas production function. The population aged 15 to 64 is used as a proxy for labor. The other variables used are gross capital formation, the technological gap, and a dummy variable measuring, among other things, political stability. We find a positive correlation between GDP per capita in Pakistan and two variables: FDI and the population aged 15 to 64. The GDP gap (the gap between the GDP of the USA and the GDP of Pakistan) is negatively correlated with GDP per capita, as expected. Political instability, economic crises, wars and polarization in society have no significant impact on GDP per capita in the long run.
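The abstract does not give the estimated specification; the generic Cobb-Douglas form it refers to is
\[ Y_t \;=\; A_t\, K_t^{\alpha}\, L_t^{\beta}, \]
with output \(Y\), total factor productivity \(A\), capital \(K\) (proxied here by gross capital formation) and labor \(L\) (proxied by the population aged 15 to 64); presumably the estimated equation augments this with FDI, the technological gap and the political-stability dummy.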
Abstract:
The FE ('fixed effects') estimator of technical inefficiency performs poorly when N ('number of firms') is large and T ('number of time observations') is small. We propose estimators of both the firm effects and the inefficiencies which have small-sample gains over the traditional FE estimator. The estimators are based on nonparametric kernel regression of unordered variables, which includes the FE estimator as a special case. In terms of global conditional MSE ('mean square error') criteria, it is proved that there are kernel estimators which are efficient relative to the FE estimators of firm effects and inefficiencies in finite samples. Monte Carlo simulations support our theoretical findings, and an empirical example shows how the traditional FE estimator and the proposed kernel FE estimator lead to very different conclusions about the inefficiency of Indonesian rice farmers.
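As a point of reference only (the kernel-smoothed estimators proposed in the paper are not reproduced here), a minimal sketch of the traditional FE estimator of technical inefficiency in the Schmidt and Sickles tradition, where each firm's inefficiency is measured relative to the best estimated firm effect; all names and data shapes are illustrative.

import numpy as np

def fe_inefficiency(y, X, firm_ids):
    # Within (fixed effects) regression: demean y and X by firm.
    firms = np.unique(firm_ids)
    y_w, X_w = y.astype(float).copy(), X.astype(float).copy()
    for f in firms:
        m = firm_ids == f
        y_w[m] -= y[m].mean()
        X_w[m] -= X[m].mean(axis=0)
    beta = np.linalg.lstsq(X_w, y_w, rcond=None)[0]
    # Recover firm effects as firm means of the residuals y - X beta.
    alpha = np.array([(y[firm_ids == f] - X[firm_ids == f] @ beta).mean()
                      for f in firms])
    # Inefficiency of each firm relative to the best (largest) firm effect.
    u = alpha.max() - alpha
    return beta, dict(zip(firms, u))

When T is small the estimated firm effects are noisy, which is exactly the setting in which the abstract argues that kernel-smoothed firm effects improve on this benchmark.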
Abstract:
While the simulation of flood risks originating from the overtopping of river banks is well covered by continuously evaluated programs to improve flood protection measures, flash flooding is not. Flash floods are triggered by short, local thunderstorm cells with high precipitation intensities. Small catchments have short response times and flow paths, and convective thunderstorm cells may cause flooding of endangered settlements. Assessing local flooding and flood pathways requires a detailed hydraulic simulation of the surface runoff. Hydrological models usually do not represent surface runoff at this level of detail; instead, empirical equations are applied for runoff detention. Conversely, 2D hydrodynamic models usually do not accept distributed rainfall as input, nor do they implement the soil/surface interactions found in hydrological models. Several cases of local flash flooding in recent years have made closing the model gap between distributed rainfall and distributed runoff formation both a practical need and a research topic. Therefore, a 2D hydrodynamic model based on the depth-averaged flow equations with a finite volume discretization was extended to accept direct rainfall, enabling simulation of the associated runoff formation. The model itself is used as the numerical engine; rainfall is introduced by modifying water levels at fixed time intervals. The paper not only deals with the general application of the software but also tests the numerical stability and reliability of the simulation results. The tests use both artificial and measured rainfall series as input. Key parameters of the simulation, such as losses, roughness and the time interval for the water level manipulations, are tested regarding their impact on stability.
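As a purely illustrative sketch of the coupling described above (not the paper's software; the loss model and all names are assumptions), rainfall can be injected into a 2D model by raising the water depth in every grid cell at each coupling interval after subtracting an infiltration/loss rate:

import numpy as np

def apply_rainfall(depth, rain_intensity, loss_rate, dt_couple):
    # depth: 2D array of water depths [m] on the model grid
    # rain_intensity, loss_rate: rainfall and loss rates [m/s], scalar or per-cell arrays
    # dt_couple: coupling interval [s] at which the water levels are manipulated
    effective_depth = np.maximum(rain_intensity - loss_rate, 0.0) * dt_couple
    return depth + effective_depth

Between two such manipulations the hydrodynamic engine routes the added volume with its usual depth-averaged scheme; the sensitivity tests mentioned in the abstract vary exactly these losses, the roughness, and the coupling interval.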
Abstract:
The objective of this work is the analysis of concrete gravity dams from the construction phase up to their complete entry into service. First, the construction phase is analyzed, where the fundamental problem arises from the thermal stresses caused by the heat of hydration. The finite element method is employed to solve the heat-transfer and stress problems. The influence of layered construction is introduced by redefining the finite element mesh immediately after the placement of each concrete layer. Special attention is given to the problem of cracking in plain concrete structures. Some usual models are presented and their efficiency is discussed. Smeared-crack models have been preferred because of the several drawbacks of discrete formulations. These models, however, yield results that depend on the finite element mesh, and some additional consideration must be made to correct these distortions. This is usually attempted by adopting a reduced tensile strength defined as a function of the fracture energy of the material. In this work, it is shown that this procedure is not satisfactory, and a new formulation is proposed for the analysis of large concrete structures. The stress analysis during the construction stage of the dam employs a viscoelastic constitutive model with aging for the concrete. Because of aging, the stiffness matrix of the structure varies in time and would have to be redefined and factorized at every instant. This leads to a large computational effort, especially when the dam is built in many layers. To avoid this, an iterative procedure is adopted that allows the stiffness matrix to be redefined at only a few reference ages. In a second stage of the analysis, the dam is subjected to hydrostatic pressure and to seismic excitation. The dynamic analysis considers the motion of the coupled dam-reservoir-foundation system. The earthquake is treated as a non-stationary stochastic process, and the safety of the structure is assessed with respect to the main failure modes.
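As a brief illustration of the thermal problem solved in the construction-phase analysis (the specific hydration-heat law used in the work is not stated in the abstract), the finite element heat-transfer step solves a transient conduction equation with an internal heat source,
\[ \rho c\, \frac{\partial T}{\partial t} \;=\; \nabla \cdot \left( k\, \nabla T \right) + \dot{Q}_h(t), \]
where \(\dot{Q}_h(t)\) is the heat generation rate due to cement hydration; the resulting temperature field feeds the thermal-stress analysis of each concrete layer.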
Abstract:
Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the serial-correlation "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns, does not depend on any parametric function representing preferences, is suitable for testing different preference specifications or investigating intertemporal substitution puzzles, and can be a basis to construct an estimator of the risk-free rate. For post-war data, our estimator is close to unity most of the time, yielding an average annual real discount rate of 2.46%. In formal testing, we cannot reject standard preference specifications used in the literature, and estimates of the relative risk-aversion coefficient are between 1 and 2 and statistically equal to unity. Using our SDF estimator, we find little sign of the equity-premium puzzle for the U.S.
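For reference, the standard form of the Pricing Equation the estimator is built on (the panel-data construction itself is the paper's contribution and is not reproduced here) is the moment condition
\[ \mathbb{E}_t\!\left[\, M_{t+1}\, R_{i,t+1} \,\right] \;=\; 1 \qquad \text{for every asset } i, \]
where \(M_{t+1}\) is the stochastic discount factor and \(R_{i,t+1}\) the gross return on asset \(i\).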
Abstract:
We combine general equilibrium theory and théorie générale of stochastic processes to derive structural results about equilibrium state prices.
Abstract:
This paper develops nonparametric tests of independence between two stationary stochastic processes. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, I take advantage of a generalized entropic measure so as to build a class of nonparametric tests of independence. Asymptotic normality and local power are derived using the functional delta method for kernels, whereas finite sample properties are investigated through Monte Carlo simulations.
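A toy plug-in illustration of the idea of gauging the closeness between the joint density and the product of the marginals (using Gaussian kernel estimates and a KL-type entropic measure; the paper's generalized entropy, bandwidth choices and asymptotic theory are not reproduced, and all names are illustrative):

import numpy as np
from scipy.stats import gaussian_kde

def independence_statistic(x, y):
    # Kernel estimates of the joint density and of the two marginals,
    # evaluated at the sample points.
    f_joint = gaussian_kde(np.vstack([x, y]))(np.vstack([x, y]))
    f_x = gaussian_kde(x)(x)
    f_y = gaussian_kde(y)(y)
    # Entropic (KL-type) measure of dependence: zero when x and y are independent.
    return np.mean(np.log(f_joint / (f_x * f_y)))

Under independence the statistic should be close to zero; in practice its null distribution would come from the asymptotic theory developed in the paper or from resampling.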
Abstract:
In this article we use factor models to describe a certain class of covariance structures for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions of the models presented here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
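As a compact illustration of the model class described (the exact prior structure and the law of motion for the loadings are the paper's own and are not reproduced here), a factor stochastic volatility model for a return vector \(y_t\) can be written as
\[ y_t = B_t f_t + \varepsilon_t, \qquad f_t \sim N(0, H_t), \qquad H_t = \operatorname{diag}\!\left(e^{h_{1t}}, \dots, e^{h_{kt}}\right), \]
with the log-volatilities \(h_{jt}\) following autoregressive processes and the loadings matrix \(B_t\) allowed to evolve over time, which is the feature emphasized in the abstract.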
Abstract:
The past decade has witnessed a series of (well accepted and defined) financial crisis periods in the world economy. Most of these events are country specific and eventually spread across neighboring countries, with the concept of vicinity extrapolating from geographic maps to contagion maps. Unfortunately, what contagion represents and how to measure it are still unanswered questions. In this article we measure the transmission of shocks by cross-market correlation coefficients, following Forbes and Rigobon's (2000) notion of shift-contagion. Our main contribution relies upon the use of traditional factor model techniques combined with stochastic volatility models to study the dependence among Latin American stock price indexes and the North American index. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. From a theoretical perspective, we improve currently available methodology by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in the series' weights over time. By doing this, we believe that the changes and interventions experienced by those five countries are well accommodated by our models, which learn and adapt reasonably fast to those economic and idiosyncratic shocks. We empirically show that the time-varying covariance structure can be modeled by one or two common factors and that some sort of contagion is present in most of the series' covariances during periods of economic instability, or crisis. Open issues on real-time implementation and natural model comparisons are thoroughly discussed.
Abstract:
The franchising literature has virtually ignored the role of psychological aspects in firms' interorganizational outcomes, despite their influence on organizational results and on relationship quality. This study therefore aims to analyze the influence of personality and entrepreneurial potential on relationship quality and financial performance in the franchisor-franchisee relationship over time, from the franchisees' perspective. The study also analyzes the role of relationship length on relationship quality and financial performance. A self-administered questionnaire, sent by e-mail, was used to collect data from a sample of 342 franchisees belonging to 3 franchise networks. Personality was measured through the "Big Five" personality traits (IPIP-B5 scales): extraversion, agreeableness, conscientiousness, emotional stability and imagination. Entrepreneurial potential was measured through the CEI (Carland Entrepreneurship Index). Relationship quality was structured as a second-order construct composed of 23 items (incorporating trust, commitment and satisfaction with the relationship), and financial performance was represented by a scale measuring sales growth and profitability. Relationship length was measured as the number of months of the franchisee-franchisor relationship. The hypotheses were tested through structural equation modeling using the partial least squares (PLS) method, regression analysis and analysis of means. Three of the five personality dimensions showed the predicted effect on relationship quality: agreeableness (positively), emotional stability (positively) and imagination (positively). Financial performance was influenced, as predicted, by conscientiousness (positively), emotional stability (positively) and imagination (positively). As expected, relationship quality had a positive and significant effect on financial performance. Entrepreneurial potential showed the predicted positive effect only on performance. Relationship length had the expected positive effect on the franchisor-franchisee relationship with respect to both relationship quality and financial performance, but the differences between the proposed relationship phases were only partially confirmed, since the analysis of means showed significant differences in only two phases (routine and stabilization). The results indicate that personality influences relationship quality and performance, but the way in which this occurs in the Brazilian context, where this research was carried out, differs from the findings of the research conducted in Australia, suggesting that factors such as culture and market stability may influence the relation between personality traits and relationship quality, and between personality traits and financial performance. Entrepreneurial potential appears to positively influence franchisee performance, but its influence on relationship quality was not significant. The results also indicate the importance of time in the development of relationship quality and performance.
Furthermore, longer relationships are associated with better franchisee assessments of relationship quality and financial performance. The limitations of the study and suggestions for future research are also discussed.
Abstract:
Paper presented at the XXXV CNMAC, Natal-RN, 2014.
Abstract:
We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
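As context only (the paper's contribution is precisely a cut formula for the risk-averse, convex nonlinear case, which is not reproduced here), in the risk-neutral linear setting with a discrete, stage-wise independent process the outer linearization of the expected recourse function \(\mathcal{Q}_{t+1}\) is built from cuts of the form
\[ \mathcal{Q}_{t+1}(x_t) \;\ge\; \sum_{j=1}^{N_{t+1}} p_j \left[\, Q_{t+1}(\bar{x}_t, \xi_{t+1,j}) + \pi_{t+1,j}^{\top} (x_t - \bar{x}_t) \,\right], \]
where \(\bar{x}_t\) is the trial decision from the forward pass, \(\xi_{t+1,j}\) are the possible realizations with probabilities \(p_j\), and \(\pi_{t+1,j}\) is a subgradient of \(Q_{t+1}(\cdot, \xi_{t+1,j})\) at \(\bar{x}_t\) obtained from the optimal dual solution of the stage-\((t+1)\) subproblem.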
Abstract:
We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same time, such bounds are more reliable than “standard” confidence bounds obtained through the asymptotic approach. We also discuss bounding the optimal value of MinMax Stochastic Optimization and stochastically constrained problems. We conclude with a small simulation study illustrating the numerical behavior of the proposed bounds.
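As an illustrative sketch of the classical Monte Carlo bounds that the abstract takes as its starting point (the "standard" asymptotic bounds, not the improved non-asymptotic bounds proposed in the paper), for a minimization problem \(\min_x \mathbb{E}[F(x,\xi)]\); solve_saa, evaluate and sample are placeholders the user must supply:

import numpy as np
from scipy.stats import norm

def saa_bounds(solve_saa, evaluate, sample, n, m_reps, n_eval, alpha=0.05):
    # solve_saa(scenarios) -> (x_hat, v_hat): optimal solution and value of the
    #   sample-average problem built on the given scenarios.
    # evaluate(x, scenarios) -> array of F(x, xi) over the scenarios.
    # sample(k) -> k i.i.d. scenarios.
    z = norm.ppf(1.0 - alpha)
    # Lower bound: the expected SAA optimal value never exceeds the true optimum,
    # so the mean of independent SAA optimal values gives a statistical lower bound.
    v_hats = np.array([solve_saa(sample(n))[1] for _ in range(m_reps)])
    lower = v_hats.mean() - z * v_hats.std(ddof=1) / np.sqrt(m_reps)
    # Upper bound: any fixed candidate solution is suboptimal, so its estimated
    # expected cost on a fresh, large sample gives a statistical upper bound.
    x_hat, _ = solve_saa(sample(n))
    values = np.asarray(evaluate(x_hat, sample(n_eval)))
    upper = values.mean() + z * values.std(ddof=1) / np.sqrt(n_eval)
    return lower, upper

The gap between such bounds reflects both the suboptimality of the candidate solution and the sampling error, which is the issue the non-asymptotic bounds in the paper are designed to address.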